Everything you need to know about photographing the solar eclipse and putting the results on Wikimedia Commons

Photo by Luc Viatour, CC BY-SA 3.0.

This coming Monday, 14 US states will have the chance to witness a total solar eclipse. Other parts of the Americas, as well as spots in Asia, Africa, and Western Europe, will see a partial eclipse. It is the first time in 99 years that a total solar eclipse will cross the contiguous US from coast to coast.

According to Wikipedia’s featured article on the topic, a solar eclipse occurs when the Moon passes between the Sun and Earth in such a way that the Moon blocks or partly blocks the Sun. In practical terms, this means that on Monday the sun will completely disappear for about 2.5 minutes along a narrow band of the US, moving west to east. There will also be several hours in which the moon is covering and uncovering the sun.

All of this partial and total darkness will have noticeable effects on the weather here on Earth. Temperatures will drop by as much as 15 degrees Fahrenheit (about 8 degrees C), and the lack of sunlight will lower the high temperature for the entire day.

In the US, interest in the solar event is extremely high. Towns along the path of the eclipse are preparing for a large influx of tourists. Solar eclipse glasses, needed to view the sun without permanently damaging your eyes, are sold out or going for exorbitant prices. But how should people document the eclipse to remember it for years to come? And how might we think about creating a public photographic record, so that people can experience the event in years to come?

That’s where Wikimedia Commons comes in. Wikimedia Commons is a freely licensed repository for educational media content. It also hosts most of the images used on Wikipedia. By sharing your photographs on Wikimedia Commons (instructions below), you can ensure that your photos are part of a greater public record of the event, and that everyone across the world will be able to witness the event for themselves.

When taking your photo, consider safety first. Ensure you use proper protection when looking at the eclipse directly or through your camera.

Here are the same steps, with links attached:

  1. Create an account.
  2. Go to the Upload Wizard and select photos to donate.
  3. Select “This file is my own work”.
  4. Choose a title, describe what the photo shows, and add the category “Solar eclipse of 2017 August 21 in ”.
  5. Click next, and you’re done!
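For the technically inclined: the description page that the Upload Wizard assembles in steps 3–4 is ordinary wikitext, built around the standard Commons {{Information}} template. Here is a minimal sketch in Python of what that page looks like; the file description, user name, and license choice are illustrative placeholders, not values from this article.

```python
def build_description(description, date, author, category):
    """Assemble a Commons file-description page using the standard
    {{Information}} template, a license tag, and a category link."""
    return (
        "=={{int:filedesc}}==\n"
        "{{Information\n"
        f"|description={description}\n"
        f"|date={date}\n"
        "|source={{own}}\n"           # "This file is my own work"
        f"|author={author}\n"
        "}}\n\n"
        "=={{int:license-header}}==\n"
        "{{self|cc-by-sa-4.0}}\n\n"   # example license; pick your own
        f"[[Category:{category}]]\n"
    )

# Hypothetical example values:
page = build_description(
    description="Totality seen from Lake Marion, South Carolina.",
    date="2017-08-21",
    author="[[User:ExampleUser|ExampleUser]]",
    category="Solar eclipse of 2017 August 21",
)
print(page)
```

The Upload Wizard generates all of this for you; the sketch is only meant to demystify what ends up on the file page.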


How should you approach photographing the eclipse, and what can you do to ensure you get the perfect shot? We talked with Juliancolton—a meteorologically focused article writer on the English Wikipedia and prolific photographer on Wikimedia Commons—about just that.

“I’m a landscape, nature, and night sky photographer,” Juliancolton says, “and my overarching goal is to capture familiar subjects or locations in striking or uncommon conditions.” In short, that means that Juliancolton does a lot of waiting around; uncommon conditions like dramatic light, intense weather, or rare astronomical events do not happen every day. “Much of my shooting takes place between dusk and dawn,” he says, “when most people are asleep and the world is, in my opinion, at its most beautiful.”

You can see some of his best work over on Wikimedia Commons, including a foggy sunrise in Rhode Island, lightning over the Hudson River, and a field of sunflowers set against the Milky Way.

For the upcoming eclipse, Juliancolton will travel to South Carolina’s Lake Marion, a body of water frequently called the state’s “inland sea.” He’d like to get close-up shots of the completely covered sun and “capture wider views of natural scenery bathed in the dim, ethereal light of totality,” he says. His camera setup will involve three DSLRs, “each intended to capture a different aspect of the phenomenon. Automation and many test-runs will allow me to shoot all three cameras while still enjoying the eclipse with my own eyes.”

Here’s what Juliancolton advises for your photographic efforts (our questions in bold):

What equipment goes into taking the perfect sky shot? How specialized does it have to be?

The most crucial part of taking spectacular images of the sky – whether the subject is celestial or confined to Earth’s atmosphere – is simply knowing when to look up, and being intimately familiar with whatever photography gear you own (camera phones and disposable film cameras included). Preparation and knowledge are much more important than purchasing the most sophisticated camera systems.

The upcoming total solar eclipse in the United States presents an exciting opportunity for photography novices and masters alike; many astronomy and photography writers have speculated that it will be the most photographed event in history. For an observer in the path of totality, where the Moon will completely obscure the Sun for a few minutes, some very nice photos of the darkened sky can be taken with smartphone cameras. More advanced imagery, including detailed shots of the eclipsed Sun, requires dedicated cameras and lenses, and even specialised astronomy equipment like solar telescopes and filters.

What kind of photographic setup would you recommend for people watching the eclipse?

In all of North America, northern South America, and small parts of western Europe, photographers will have the chance to capture a partial solar eclipse. For this, the goal will be to capture closeups of the crescent Sun, so it’s necessary to use a camera with high optical magnification or a very long lens, along with a solar filter. Without such a filter, which can be made of extremely dark glass or a special light-blocking sheet, photographing a partial eclipse will be impossible and hazardous to attempt. Even after blocking some 99.999% of light, shutter speeds will be relatively fast and consistent, so it will be possible to handhold a camera during partial stages of the eclipse. Consider taking photos at regular intervals, perhaps every 10 minutes, to show the progression of the eclipse in a timelapse or composite image.
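The interval-shooting suggestion is easy to plan ahead of time. A minimal sketch of a capture schedule follows; the contact times are made-up placeholders for a hypothetical location, so look up the actual times for your own viewing spot.

```python
from datetime import datetime, timedelta

def capture_times(start, end, interval_minutes=10):
    """Return the timestamps at which to fire the shutter,
    one frame every interval_minutes from start through end."""
    times = []
    t = start
    while t <= end:
        times.append(t)
        t += timedelta(minutes=interval_minutes)
    return times

# Illustrative contact times only (local time, hypothetical site):
first_contact = datetime(2017, 8, 21, 13, 13)  # partial phase begins
totality = datetime(2017, 8, 21, 14, 41)       # totality begins

schedule = capture_times(first_contact, totality)
print(len(schedule), "frames")
```

Fed to an intervalometer or a tethering script, a schedule like this yields evenly spaced frames for a timelapse or composite.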

Things get more complicated when attempting to photograph totality, and the moments just before and after. The same long focal lengths will be desired for the fully eclipsed Sun, but the required exposure times are longer and will change drastically from one moment to the next. I suggest using a DSLR in “manual” mode, and bracketing your exposures extensively – that is, taking many different frames with varying shutter speeds, so you can select the best ones later. To capture the Sun’s faint, outer corona, you’ll need either slow shutter speeds or relatively high ISOs, so cameras with good low-light performance are ideal, and a very steady tripod is essential. If possible, fire the shutter remotely using a wireless or cable release to minimise camera shake. Don’t forget to remove your solar filters at the very beginning of totality and replace them as soon as the Sun reemerges.
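Exposure bracketing can also be worked out in advance. Here is a rough sketch of a helper that spaces shutter speeds one stop apart around a base exposure; the base value is illustrative, and the corona’s real dynamic range spans many more stops than this.

```python
def bracket(base_seconds, stops_each_way=3):
    """Return shutter speeds from -N to +N stops around base_seconds;
    each stop doubles or halves the exposure time."""
    return [base_seconds * (2 ** s)
            for s in range(-stops_each_way, stops_each_way + 1)]

# Hypothetical base exposure of 1/60 s, bracketed three stops each way:
speeds = bracket(1 / 60, stops_each_way=3)
# seven frames, from 1/480 s up to roughly 1/7.5 s
```

Shooting the whole bracket at each moment of totality lets you pick the best-exposed frames (or blend them) later.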

For some parting advice, don’t let photography ruin your eclipse experience. Use only the equipment you’re most comfortable with, and if your camera starts to cause you stress during this unique event, just turn it off. Finally, please remember to upload any images you do capture to Wikimedia Commons, even if they look similar or identical to the hundreds of other photos that will surely appear. Scientists are hoping to use this eclipse as an opportunity to confirm suspicions that the Sun is slightly bigger than traditionally thought, so with precise geotagging, your photos may just have a large impact.

And most importantly, be safe. Never look at the sun without specialized glasses, as you will damage your eyes. Avoid being a person quoted fifty years from now about your eclipse-caused eye damage. If you are not in the narrow path of totality, you will need to have your glasses on for the entire eclipse. If you are in the path of totality, see NASA’s explainer on when you can have your glasses off.

Ed Erhart, Editorial Associate
Wikimedia Foundation

Thanks to Blanca Flores of the Wikimedia Foundation for the infographic above. It’s licensed under CC BY-SA 3.0.

Wikimedia Commons accepts all kinds of educational media content, and it’s all freely licensed—available for anyone to use, anywhere, with no fee. Exact copyright licenses can vary, but generally you need to credit the author and share any remixes under a similar license. Join them today!

Published at Fri, 18 Aug 2017 21:51:31 +0000

We need transparency and permissive copyright in NAFTA

Map by the Smithsonian, public domain.

Canada, the United States, and Mexico have started negotiations for a new NAFTA treaty. After a period of uncertainty, we now know that copyright provisions are among the items to be discussed by the treaty nations. This brings back memories of a trade treaty that seemed defeated just last year: the Trans-Pacific Partnership (TPP) included worrisome norms on copyright that would have seriously harmed the public domain in various countries and cemented long copyright terms for years to come. TPP was further problematic because of secrecy and a lack of transparency around the negotiations, which made it hard for civil society to stay informed, let alone voice its concerns.

Along with many other organizations that support and promote open culture and freedom on the internet – from all three countries – we have signed a statement urging the parties to the treaty to make the negotiations more transparent, inclusive, and participatory. Meaningful transparency helps people understand and take action. At Wikimedia, openness and collaborative processes are the default. Only when this value of meaningful transparency is upheld in trade negotiations can we as a society make sure that the public interest is represented alongside powerful industry stakeholders. Increased transparency and active inclusion would also improve acceptance in society at large, especially in Mexico, where some feel that the United States is exporting its business model to weaker states.

Our letter calls on Canada, the United States, and Mexico not to touch the intellectual property provisions in the existing agreement. In the digital age, the ways people access and participate in knowledge change rapidly and constantly. Therefore, it does not make sense to lock in new copyright rules that would prevent countries from adopting dynamic legislation or regulation that is appropriate within their respective ecosystem of knowledge production and consumption. Governments should be able to promote freedom of expression and access to knowledge at all times.

Given that copyright is nevertheless now part of the negotiations, we urge governments to adequately promote creativity through permissive copyright and strong protections for the public domain. We believe that copyright should reflect the reality that people do not just read, but also create, share, and remix information. The treaty should safeguard the rights of these new creators through strong exceptions and limitations, fair use or fair dealing rules, and a vibrant public domain. When volunteers contribute to Wikipedia or other Wikimedia projects, they become creators themselves and depend on copyright to empower them to collect knowledge online and to participate in the preservation of culture and history. Intellectual property provisions also need to be mindful of indigenous knowledge and folklore, which form an important part of the cultural heritage of North America.

Finally, consistent with our continuous commitment to strong privacy rules, our letter points out that any provision in the agreement governing data flows on the internet with the goal of reducing barriers to trade must not restrict countries’ ability to protect the privacy and security of citizens. This is crucial, since we believe that privacy is the foundation of intellectual freedom and allows everyone to contribute to free knowledge.

The new negotiations for NAFTA will shape how people in North America share and consume knowledge for years to come. The negotiations must be transparent so we and other supporters of open culture and internet freedom can contribute and participate. It is in the public interest to make sure any new copyright provisions will allow free knowledge to continue to thrive.

Jan Gerlach, Public Policy Manager
Wikimedia Foundation

Published at Fri, 18 Aug 2017 22:35:18 +0000

Felix Nartey named Wikimedian of the Year for 2017

Photo by Jason Krüger/Wikimedia Deutschland e.V., CC BY-SA 4.0.

Last Sunday in Montreal, Quebec, Canada, Wikimania 2017 concluded. In the closing ceremony, Jimmy Wales, founder of Wikipedia, announced Felix Nartey as the 2017 Wikimedian of the Year for his efforts to promote free knowledge sharing culture in Africa.

Nartey joined the Wikimedia movement in 2012. Since then, he has been concerned about content gaps on the Wikimedia projects: coverage of his native Ghana and of the African continent is not on the same level as that of Europe and North America. “Information itself is useless until it’s shared with the … world,” he says. Nartey has researched ways to encourage people from his community to participate in Wikipedia and its sister projects, and has put them into practice by leading in-person initiatives and activities that promote Wikipedia and help new participants find resources for their contributions.

The Wikimedian of the Year is an annual tradition honoring one of the movement’s exceptional contributors. Wales has announced the winner during his closing speech at Wikimania every year since 2011.

This year’s winner, Felix Nartey, wasn’t able to attend Wikimania, so he was notified of the honor in a video call with Wales and Emily Temple-Wood, who shared last year’s title with Rosie Stephenson-Goodknight.

Video by the Wikimedia Foundation, CC BY-SA 3.0. You can also view it on Vimeo.

Samir Elsharbaty, Digital Content Intern
Wikimedia Foundation

Published at Wed, 16 Aug 2017 21:43:25 +0000

Honoring our friend Bassel: Announcing the Bassel Khartabil Free Culture Fellowship

Photo by Joi Ito, CC BY 2.0.

On 1 August 2017, we received the heartbreaking news that our friend Bassel (Safadi) Khartabil, detained since 2012, was executed by the Syrian government shortly after his 2015 disappearance. Khartabil was a Palestinian Syrian open internet activist, a free culture hero, and an important member of our community. Our thoughts are with Bassel’s family, now and always.

Today we’re announcing the Bassel Khartabil Free Culture Fellowship to honor his legacy and lasting impact on the open web.

Bassel was a relentless advocate for free speech, free culture, and democracy. He was the cofounder of Syria’s first hackerspace, Aiki Lab, Creative Commons’ Syrian project lead, and a prolific open source contributor, from Firefox to Wikipedia. Bassel’s final project, relaunched as #NEWPALMYRA, entailed building free and open 3D models of the ancient Syrian city of Palmyra. In his work as a computer engineer, educator, artist, musician, cultural heritage researcher, and thought leader, Bassel modeled a more open world, impacting lives globally.

To honor that legacy, the Bassel Khartabil Free Culture Fellowship will support outstanding individuals developing the culture of their communities under adverse circumstances. The Fellowship—organized by Creative Commons, Mozilla, the Wikimedia Foundation, the Jimmy Wales Foundation, #NEWPALMYRA, and others—will launch with a three-year commitment to promote values like open culture, radical sharing, free knowledge, remix, collaboration, courage, optimism, and humanity.

As part of this new initiative, fellows can work in a range of mediums, including art, music, software, or community building. All projects will catalyze free culture, particularly in societies vulnerable to attacks on freedom of expression and free access to knowledge. Special consideration will be given to applicants operating within closed societies and in developing economies where other forms of support are scarce. Applications from the Levant and wider MENA region are greatly encouraged.

Throughout their fellowship term, chosen fellows will receive a stipend, mentorship from affiliate organizations, skill development, project promotion, and fundraising support from the partner network. Fellows will be chosen by a selection committee composed of representatives of the partner organizations.

“Bassel introduced me to Damascus communities who were hungry to learn, collaborate and share,” says Mitchell Baker, Mozilla executive chairwoman. “He introduced me to the Creative Commons community which he helped found. He introduced me to the open source hacker space he founded, where Linux and Mozilla and JavaScript libraries were debated, and the ideas of open collaboration blossomed. Bassel taught us all. The cost was execution. As a colleague, Bassel is gone. As a leader and as a source of inspiration, Bassel remains strong. I am honored to join with others and echo Bassel’s spirit through this Fellowship.”

Fellowship details

Organizational Partners include Creative Commons, #FREEBASSEL, Wikimedia Foundation, GlobalVoices, Mozilla, #NEWPALMYRA, YallaStartup, the Jimmy Wales Foundation and SMEX.

Amazon Web Services is a supporting partner.

The Fellowships are based on one-year terms, which are eligible for renewal.

The benefits are designed to allow for flexibility and stability both for Fellows and their families. The standard fellowship offers a stipend of $50,000 USD, paid in 10 monthly installments. Fellows are responsible for remitting all applicable taxes as required.

To help offset the cost of living, the fellowship also provides supplements for childcare and health insurance, and may provide support for project funding on a case-by-case basis. The fellowship also covers the cost of required travel for fellowship activities.

Fellows will receive:

  • A stipend of $50,000 USD, paid in 10 monthly installments
  • A one-time health insurance supplement for Fellows and their families, ranging from $3,500 for single Fellows to $7,000 for a couple with two or more children
  • A one-time childcare allotment of up to $6,000 for families with children
  • An allowance of up to $3,000 towards the purchase of a laptop computer, digital cameras, recorders, and computer software; fees for continuing studies or other courses; and research fees or payments, to the extent such purchases and fees are related to the fellowship
  • Coverage in full for all approved fellowship trips, both domestic and international

The first fellowship will be awarded in April 2018. Applications will be accepted beginning February 2018.

Eligibility requirements. The Bassel Khartabil Free Culture Fellowship is open to individuals and small teams worldwide, who:

  • Propose a viable new initiative to advance free culture values as outlined in the call for applicants
  • Demonstrate a history of activism in the Open Source, Open Access, Free Culture or Sharing communities
  • Are prepared to focus on the fellowship as their primary work

Special consideration will be given to applicants operating under oppressive conditions, within closed societies, in developing economies where other forms of support are scarce, and in the Levant and wider MENA region.

Eligible projects. Proposed projects should advance the free culture values of Bassel Khartabil through the use of art, technology, and culture. Successful projects will aim to:

  • Meaningfully increase free public access to human knowledge, art or culture
  • Further the cause of social justice/social change
  • Strive to develop both a local and global community to support its cause

Any code, content or other materials produced must be published and released as free, openly licensed and/or open-source.

Application process. Project proposals are expected to include the following:

  • Vision statement
  • Bio and CV
  • Budget and resource requirements for the next year of project development

Applicants whose projects are chosen to advance to the next stage in the evaluation process may be asked to provide additional information, including personal references and documentation verifying income.

About Bassel

Bassel Khartabil, a Palestinian-Syrian computer engineer, educator, artist, musician, cultural heritage researcher and thought leader, was a central figure in the global free culture movement, connecting and promoting Syria’s emerging tech community as it existed before the country was ransacked by civil war. Bassel co-founded Syria’s first hackerspace, Aiki Lab, in Damascus in 2010. He was the Syrian lead for Creative Commons as well as a contributor to Mozilla’s Firefox browser and the Red Hat Fedora Linux operating system. His research into preserving Syrian archeology with computer 3D modeling was a seminal precursor to current practices in digital cultural heritage preservation — this work was relaunched as the #NEWPALMYRA project in 2015.

Bassel’s influence went beyond Syria. He was a key attendee at the Middle East’s bloggers conferences and played a vital role in the negotiations in Doha in 2010 that led to a common language for discussing fair use and copyright across the Arab-speaking world. Software platforms he developed, such as the open-source Aiki Framework for collaborative web development, still power high-traffic web sites today, including Open Clip Art and the Open Font Library. His passion and efforts inspired a new community of coders and artists to take up his cause and further his legacy, and resulted in the offer of a research position in MIT Media Lab’s Center for Civic Media; his listing in Foreign Policy’s 2012 list of Top Global Thinkers; and the award of Index on Censorship’s 2013 Digital Freedom Award.

Bassel was taken from the streets in March of 2012 in a military arrest and interrogated and tortured in secret in a facility controlled by Syria’s General Intelligence Directorate. After a worldwide campaign by international human rights groups, together with Bassel’s many colleagues in the open internet and free culture communities, he was moved to Adra’s civilian prison, where he was able to communicate with his family and friends. His detention was ruled unlawful by the United Nations Working Group on Arbitrary Detention, and condemned by international organizations such as Creative Commons, Amnesty International, Human Rights Watch, the Electronic Frontier Foundation, and the Jimmy Wales Foundation.

Despite the international outrage at his treatment and calls for his release, in October of 2015 he was moved to an undisclosed location and executed shortly thereafter—a fact that was kept secret by the Syrian regime for nearly two years.

Published at Fri, 11 Aug 2017 17:42:00 +0000

When Did Wikipedia Start?

Wikipedia was launched on January 15, 2001, by Jimmy Wales and Larry Sanger. Sanger coined its name, a portmanteau of “wiki” and “encyclopedia”. In keeping with its encyclopedic mission, Wikipedia does not publish original research, and new articles are expected to meet the project’s notability requirements.

Wikimedia Foundation releases new transparency report, online and in print

Photo by Angelo DeSantis, CC BY 2.0.

The Wikimedia Foundation partners with users and contributors around the world to provide free access to knowledge. We value transparency: that’s why we issue our biannual transparency report, publicly disclosing the various requests we receive to alter or remove the user-created content on the Wikimedia projects, or to request nonpublic information about the users themselves. The report also includes stories about some of the interesting and unusual requests we receive, and a useful FAQ with more information about our work.

The report covers five major types of requests:

Content alteration and takedown requests. In the first six months of 2017, we received 341 requests to alter or remove project content, four of which came from government entities. We granted none of these requests. Wikimedia project content is created and vetted by user communities across the globe, and we believe that decisions about content belong in their hands. When we receive requests to remove or alter that content, we refer requesters to experienced volunteers who can provide advice and guidance.

Copyright takedown requests. The Wikimedia projects host a variety of public domain and freely licensed works, but occasionally we will receive a Digital Millennium Copyright Act (DMCA) notice asking us to remove content on copyright grounds. We analyze whether DMCA requests are properly submitted and have merit, and if so, whether an exception to the law, such as fair use, should allow the content to remain on the projects. From January 1 to June 30, 2017, we received 11 DMCA requests, three of which we granted. These remarkably low numbers are due to the diligence of the Wikimedia communities, who work to ensure that all content on the projects is appropriately licensed.

Right to erasure. The right to erasure (also known as the right to be forgotten) allows people to request that search engines remove links to results containing certain information about them. The process is best known in the European Union, where it was established by a decision of the Court of Justice of the European Union in 2014. The Wikimedia Foundation has long expressed our concerns about such rules, which have the potential to limit the access to and sharing of information that is in the public interest. Even though the Wikimedia projects are not a search engine, we do sometimes receive requests to delete information based on the right to erasure. However, we did not receive any such requests in the first half of 2017.

Requests for user data. The Wikimedia Foundation occasionally receives requests for nonpublic user data from governments, organizations, and individuals. These requests may be informal, such as simple emails or phone calls, or can involve formal legal processes, such as a subpoena. Protecting users is our leading concern, and we evaluate each request carefully. Unlike many online platforms, we intentionally collect very little nonpublic information about users, and often have no data that is responsive to these requests. We will only produce information if a request is legally valid and follows our Requests for user information procedures and guidelines. Even then, we will push back where we can, to narrow the request and provide as little data as possible. During this reporting period, we received 18 requests for nonpublic user data. We partially complied with three of these requests.

Emergency disclosures. On rare occasions, the Wikimedia Foundation will disclose otherwise nonpublic information to law enforcement authorities to protect a user or other individuals from serious harm. For example, if a user threatens harm to themselves or others, other users may notify us. In some cases, we may then voluntarily provide information to authorities where we believe there is a serious danger to one or more individuals and disclosure is necessary to keep people safe. Additionally, we have implemented an emergency request procedure so that law enforcement may contact us if they are working to prevent imminent harm. We assess such requests on a case-by-case basis. From January to June, 2017, we voluntarily disclosed information in 14 cases, and provided data in response to two emergency requests.

We invite you to read the full transparency report online for more data and interesting stories. For the first time, you can also learn about our commitment to protecting user privacy and project content in print: the print transparency report will be available from Foundation legal and public policy staff at conferences and meetups while supplies last. Additionally, a limited number of printed copies can be requested by emailing privacy@wikimedia.org.

James Buatti, Legal Counsel
Leighanna Mixter, Legal Fellow
Aeryn Palmer, Legal Counsel

The transparency report would not be possible without the contributions of Jacob Rogers, Jan Gerlach, Stephen LaPorte, Katie Francis, Rachel Stallman, Eileen Hershenov, James Alexander, Siddharth Parmar, Wendy Chu, Diana Lee, Dina Ljekperic, and the entire Wikimedia communications team. Special thanks to Alex Shahrestani for help in preparing this blog post, and to the entire staff at Mule Design and Oscar Printing Company.

Published at Sat, 12 Aug 2017 14:01:05 +0000

How Much Money Is Wikipedia Trying To Raise?

According to the Wikimedia Foundation’s official blog, the WMF – the nonprofit that administers Wikipedia – hoped to raise $25 million in its December 2015 fundraising drive to keep the site online and growing. Critics such as Andrew Orlowski have argued that Wikipedia doesn’t need the money, noting that much of it goes to employee salaries and that, per its financial statements, the Foundation is also building up an endowment, much like a university.

Wikimedia Research Newsletter, May 2017

Wikimedia Research Newsletter, May 2017

“Wikipedia matters”: a significant impact of user-generated content on real-life choices

Reviewed by Marco Chemello and Federico Leva

Improving Wikipedia articles may help increase local tourism. That is the finding of a study[1] published as a preprint a few weeks ago by M. Hinnosaar, T. Hinnosaar, M. Kummer and O. Slivko. This group of scholars from several institutions – including Collegio Carlo Alberto, the Center for European Economic Research (ZEW) and the Georgia Institute of Technology – conducted a field experiment in 2014: they expanded 120 Wikipedia articles about 60 Spanish cities and measured the impact on local tourism through the change in hotel stays in those cities, broken down by the tourists’ country of origin. The result was an average increase of 9% (up to 28% in the best cases). The randomly selected city articles were expanded mainly by translating content from the Spanish or English edition of Wikipedia into other languages, and by adding some photos. The authors wrote: “We found a significant causal impact of user-generated content in Wikipedia on real-life choices. The impact is large. A well-targeted two-paragraph improvement may lead to a 9 % increase in the visits by tourists. This has significant implications both in macroeconomic and microeconomic scale.”

The study revises an earlier version[supp 1] which found the data inconclusive (not yet statistically significant), although there were hints of a positive effect. It is not entirely clear to this reviewer how statistical significance was ascertained, but the method used to collect the data was sound:

  • 240 similar articles were selected and 120 kept as control (by not editing them);
  • the sample only included mid-sized cities (big cities would be harder to impact and small ones would be more susceptible to unrelated oscillations of tourism);
  • hotel stays were measured by city and by tourists’ country of origin, making it possible to isolate the subset of tourists affected by the edits (those reading in the language that was edited);
  • as expected, the impact is larger on the cities whose article was especially small at the beginning;
  • the authors took care to make their contributions consistent with local policies and expectations, and checked that their edits were accepted by measuring content persistence (about 96% of their text survived long-term).

Curiously, while the authors had no problems adding their translations and images to French, German and Italian Wikipedia, all their edits were reverted on the Dutch Wikipedia. Local editors may want to investigate what made the edits unacceptable: perhaps the translator was not as good as those in the other languages, or the local community is prejudicially hostile to new users editing a mid-sized group of pages at once, or some rogue user reverted edits which the larger community would accept? [PS: One of our readers from the Dutch Wikipedia has provided some explanations.]

Assuming that expanding 120 stubs by translating existing articles from other languages takes a few hundred hours of work and actually produces about €160,000 in additional revenue per year, as the authors estimate, it would seem a bargain for any country’s tourism ministry to hire experienced translators with basic wiki-editing skills to expand Wikipedia stubs in as many tourists’ languages as possible, making sure each article has at least one image. Given that providing basic information is sufficient and that neutral text is generally available in the source language’s Wikipedia, complying with the neutral point of view and other content standards should be reasonably easy.
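The cost-benefit claim above amounts to simple arithmetic. In this sketch, the article count and annual-revenue figures come from the study as summarized here; the hours of work and the translator’s hourly rate are purely illustrative assumptions, not figures from the paper:

```python
# Back-of-envelope check of the study's cost-benefit claim.
# From the paper as summarized: 120 expanded articles, ~160,000 EUR
# in estimated additional yearly revenue.
# The labour figures below are illustrative assumptions only.

ARTICLES_EXPANDED = 120
EXTRA_REVENUE_PER_YEAR_EUR = 160_000
ASSUMED_HOURS_OF_WORK = 300          # "a few hundred hours" (assumption)
ASSUMED_HOURLY_RATE_EUR = 30         # translator's rate (assumption)

revenue_per_article = EXTRA_REVENUE_PER_YEAR_EUR / ARTICLES_EXPANDED
labour_cost = ASSUMED_HOURS_OF_WORK * ASSUMED_HOURLY_RATE_EUR
first_year_return = EXTRA_REVENUE_PER_YEAR_EUR / labour_cost

print(f"~{revenue_per_article:.0f} EUR extra revenue per expanded article per year")
print(f"~{labour_cost} EUR one-off labour cost")
print(f"~{first_year_return:.1f}x return in the first year alone")
```

Under these assumptions the one-off translation cost is repaid many times over in the first year, and the revenue recurs yearly, which is what makes the “bargain” framing plausible.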

Improved article quality predictions with deep learning

Reviewed by Morten Warncke-Wang

A paper at the upcoming OpenSym conference titled “An end-to-end learning solution for assessing the quality of Wikipedia articles”[2] combines the popular deep learning approaches of recurrent neural networks (RNN) and long short-term memory (LSTM) to make substantial improvements in our ability to automatically predict the quality of Wikipedia’s articles.

The two researchers from Université de Lorraine in France first published on using deep learning for this task a year ago (see our coverage in the June 2016 newsletter), where their performance was comparable to the state-of-the-art at the time, the WMF’s own Objective Revision Evaluation Service (ORES) (disclaimer: the reviewer is the primary author of the research upon which ORES’ article quality classifier is built). Their latest paper substantially improves the classifier’s performance to the point where it clearly outperforms ORES. Additionally, using RNNs and LSTM means the classifier can be trained on any language Wikipedia, which the paper demonstrates by outperforming ORES in all three of the languages where it’s available: English, French, and Russian.
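The language-independence follows from the “end-to-end” design: the classifier consumes the article text directly as a sequence of integer token ids, with no language-specific feature engineering. The toy sketch below (not the authors’ code; the vocabulary scheme and sequence length are arbitrary illustrative choices) shows the kind of input encoding such a model would consume:

```python
# Toy sketch (not the paper's code): an end-to-end RNN/LSTM quality
# classifier takes raw text as a fixed-length sequence of integer token
# ids, so the same pipeline works for any language edition.

def build_vocab(texts):
    """Assign an integer id to every word seen in the corpus (0 = padding)."""
    vocab = {"<pad>": 0, "<unk>": 1}
    for text in texts:
        for word in text.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def encode(text, vocab, seq_len=8):
    """Map text to a fixed-length id sequence, padding or truncating."""
    ids = [vocab.get(w, vocab["<unk>"]) for w in text.lower().split()]
    return (ids + [vocab["<pad>"]] * seq_len)[:seq_len]

articles = ["the battle of bellevue was fought in 1870",
            "stub article"]
vocab = build_vocab(articles)
print(encode(articles[1], vocab))  # short article is padded with zeros
```

A real system would feed these id sequences through an embedding layer into stacked LSTM cells; the point here is only that nothing in the input pipeline depends on the language being English.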

The paper also contains a solid discussion of some of the current limitations of the RNN+LSTM approach. For example, the time it takes to make a prediction is too slow to deploy in a setting such as ORES where quick predictions are required. Also, the custom feature sets that ORES has allow for explanations on how to improve article quality (e.g. “this article can be improved by adding more sources”). Both are areas where we expect to see improvements in the near future, making this deep learning approach even more applicable to Wikipedia.

Recent behavior has a strong impact on content quality

Reviewed by Morten Warncke-Wang

A recently published journal paper by Michail Tsikerdekis titled “Cumulative Experience and Recent Behavior and their Relation to Content Quality on Wikipedia”[3] studies how factors like an editor’s recent behavior, their editing experience, experience diversity, and implicit coordination relate to improvements in article quality in the English Wikipedia.

The paper builds upon previous work by Kittur and Kraut that studied implicit coordination,[supp 2] where they found that having a small group of contributors doing the majority of the work was most effective. It also builds upon work by Arazy and Nov on experience diversity,[supp 3] which found that the diversity of experience in the group was more important.

Arguing that it is not clear which of these factors is dominant, Tsikerdekis further extends these models. First, experience diversity is refined by measuring accumulated editor experience in three key areas: high-quality articles, the User and User talk namespaces, and the Wikipedia namespace. Second, editor behavior is refined by measuring recent participation in the same three areas. Finally, he adds interaction effects, for example between these two new refinements and implicit coordination.

Using the more refined model of experience diversity results in a significant improvement over baseline models, and an interaction effect shows that high coordination inequality (few editors doing most of the work) is only effective when contributors have low experience editing the User and User talk namespaces. However, the models that incorporate recent behavior are substantial improvements, indicating that recent behavior has a much stronger impact on quality than overall editor experience and experience diversity. Again studying the interaction effects, the findings are that implicit coordination is most effective when contributors have not recently participated in high quality articles, and that contributors make a stronger impact on content quality when they edit articles that match their experience levels.

These findings raise important questions about how groups of contributors in Wikipedia can most effectively work together to improve article quality. Future work is needed to understand when explicit coordination is most useful, and the paper points to the possibility of using recommender systems to route contributors to groups where their experience level can make a difference.


Predicting book categories for Wikipedia articles

Reviewed by Morten Warncke-Wang

“Automatic Classification of Wikipedia Articles by Using Convolutional Neural Network”[4] is the title of a paper published at this year’s Qualitative and Quantitative Methods in Libraries conference. As the title describes, the paper applies convolutional neural networks (CNN) to the task of predicting the Nippon Decimal Classification (NDC) category that a Japanese Wikipedia article belongs to. This NDC category can then be used for example to suggest further reading, providing a bridge between the online content of Wikipedia and the books that are available in Japan’s libraries.

In the paper, a Wikipedia article is represented as a combination of Word2vec vectors: one vector for the article’s title, one each for the categories it belongs to, and one for the entire article text. These vectors combine to form a two-dimensional matrix, which the CNN is trained on. Combining the title and category vectors results in the highest performance, with 87.7% accuracy in predicting the top-level category and 74.7% accuracy for the second-level category. The results are promising enough that future work is suggested where these will be used for book recommendations.
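The input construction can be made concrete with a small sketch. This is not the paper’s code: the “embedding” below is a deterministic hash-based stand-in for real Word2vec vectors, and the tiny dimensionality is an illustrative assumption. The point is only how per-field vectors stack into the two-dimensional matrix a CNN trains on:

```python
# Illustrative sketch (not the paper's code): represent an article as a
# matrix whose rows are fixed-dimension vectors -- one for the title,
# one per category, one for the body. A hash stands in for Word2vec.
import hashlib

DIM = 4  # real Word2vec vectors are typically 100-300 dimensions

def fake_embedding(text):
    """Deterministic stand-in for a Word2vec vector (demo assumption)."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:DIM]]  # values scaled into [0, 1]

def article_matrix(title, categories, body):
    """Stack title, category, and body vectors into a 2-D CNN input."""
    rows = [fake_embedding(title)]
    rows += [fake_embedding(c) for c in categories]
    rows.append(fake_embedding(body))
    return rows

m = article_matrix("Battle of Bellevue",
                   ["Franco-Prussian War", "1870 in France"],
                   "The Battle of Bellevue on 18 October 1870 ...")
print(len(m), "rows x", len(m[0]), "columns")  # 4 rows x 4 columns
```

The paper’s finding that title-plus-category vectors alone perform best corresponds to simply dropping the body row from this matrix.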

The work was motivated by “recent research findings [indicating] that relatively few students actually search and read books,” and “aims to encourage students to read library books as a more reliable source of information rather than relying on Wikipedia article.”

Conferences and events

See the research events page on Meta-wiki for upcoming conferences and events, including submission deadlines.

Other recent publications

Other recent publications that could not be covered in time for this issue include the items listed below. Contributions are always welcome for reviewing or summarizing newly published research.

Compiled by Tilman Bayer
  • “Open strategy-making at the Wikimedia Foundation: A dialogic perspective”[5] From the abstract: “What is the role of dialogue in open strategy processes? Our study of the development of Wikimedia’s 5-year strategy plan through an open strategy process [in 2009/2010] reveals the endemic nature of tensions occasioned by the intersection of dialogue as an emergent, nonhierarchical practice, and strategy, as a practice that requires direction, focus, and alignment.”
  • “Wikipedia: a complex social machine”[6] From the abstract: “We examine the activity of Wikipedia by analysing WikiProjects […] We harvested the content of over 600 active Wikipedia projects, which comprised of over 100 million edits and 15 million Talk entries, associated with over 1.5 million Wikipedia articles and Talk pages produced by 14 million unique users. Our analysis reveals findings related to the overall positive activity and growth of Wikipedia, as well as the connected community of Wikipedians within and between specific WikiProjects. We argue that the complexity of Wikipedia requires metrics which reflect the many aspects of the Wikipedia social machine, and by doing so, will offer insights into its state of health.” (See also earlier coverage of publications by the same authors)
  • “Expanding the sum of all human knowledge: Wikipedia, translation and linguistic justice”[7] From the abstract: “This paper … begins by assessing the [Wikimedia Foundation’s] Language Proposal Policy and Wikipedia’s translation guidelines. Then, drawing on statistics from the Content Translation tool recently developed by Wikipedia to encourage translation within the various language versions, this paper applies the concept of linguistic justice to help determine how any future translation policies might achieve a better balance between fairness and efficiency, arguing that a translation policy can be both fair and efficient, while still conforming to the ‘official multilingualism’ model that seems to be endorsed by the Wikimedia Foundation.” (cf. earlier paper by the same author)
  • “Nation image and its dynamic changes in Wikipedia”[8] From the abstract: “An ontology of nation image was built from the keywords collected from the pages directly related to the big three exporting countries in East Asia, i.e. Korea, Japan and China. The click views on the pages of the countries in two different language editions of Wikipedia, Vietnamese and Indonesian were counted.”
  • “‘A wound that has been festering since 2007’: The Burma/Myanmar naming controversy and the problem of rarely challenged assumptions on Wikipedia”[9] From the abstract: “The author’s approach to the study of the Wikipedia talk pages devoted to the Burma/Myanmar naming controversy is qualitative in nature and explores the debate over sources through textual analysis. Findings: Editors brought to their work a number of underlying assumptions including the primacy of the nation-state and the nature of a ‘true’ encyclopedia. These were combined with a particular interpretation of neutral point of view (NPOV) policy that unnecessarily prolonged the debate and, more importantly, would have the effect, if widely adopted, of reducing Wikipedia’s potential to include multiple perspectives on any particular topic.”
  • “The double power law in human collaboration behavior: The case of Wikipedia”[10] From the abstract: “We study [..] the inter-event time distribution of revision behavior on Wikipedia [..]. We observe a double power law distribution for the inter-editing behavior at the population level and a single power law distribution at the individual level. Although interactions between users are indirect or moderate on Wikipedia, we determine that the synchronized editing behavior among users plays a key role in determining the slope of the tail of the double power law distribution.”
  • “Wikidata: la soluzione wikimediana ai linked open data”[11] (“Wikidata: the Wikimedian solution for linked open data”, in Italian)
  • “Open-domain question answering framework using Wikipedia”[12] From the abstract: “This paper explores the feasibility of implementing a model for an open domain, automated question and answering framework that leverages Wikipedia’s knowledgebase. While Wikipedia implicitly comprises answers to common questions, the disambiguation of natural language and the difficulty of developing an information retrieval process that produces answers with specificity present pertinent challenges. […] Using DBPedia, an ontological database of Wikipedia’s knowledge, we searched for the closest matching property that would produce an answer by applying standardised string matching algorithms[…]. Our experimental results illustrate that using Wikipedia as a knowledgebase produces high precision for questions that contain a singular unambiguous entity as the subject, but lowered accuracy for questions where the entity exists as part of the object.”
  • “Textual curation: Authorship, agency, and technology in Wikipedia and Chambers’s Cyclopædia”[13] (book) From the publisher’s announcement: “Wikipedia is arguably the most famous collaboratively written text of our time, but few know that nearly three hundred years ago Ephraim Chambers proposed an encyclopedia written by a wide range of contributors—from illiterate craftspeople to titled gentry. Chambers wrote that incorporating information submitted by the public would considerably strengthen the second edition of his well-received Cyclopædia, which relied on previously published information. In Textual Curation, Krista Kennedy examines the editing and production histories of the Cyclopædia and Wikipedia, the ramifications of robot-written texts, and the issues of intellectual property theory and credit.”
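The basic measurement behind the inter-event time study listed above ([10]) can be sketched in a few lines. This is not the authors’ code, and the timestamps are invented; the sketch only shows the quantity whose distribution follows the reported double power law:

```python
# Sketch (not the authors' code): given a user's revision timestamps,
# the quantity studied in [10] is the distribution of inter-event
# times between consecutive edits.
from datetime import datetime

def inter_event_times(timestamps):
    """Seconds between consecutive events, given ISO-format timestamps."""
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    return [(b - a).total_seconds() for a, b in zip(times, times[1:])]

# Invented example: two edits five minutes apart, then a two-hour gap.
revisions = ["2016-11-01T10:00:00", "2016-11-01T10:05:00",
             "2016-11-01T12:05:00"]
print(inter_event_times(revisions))  # [300.0, 7200.0]
```

Collected over a whole population of editors, the histogram of these gaps is what exhibits a double power law at the population level but a single power law per individual, according to the abstract.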


  1. Hinnosaar, Marit; Hinnosaar, Toomas; Kummer, Michael; Slivko, Olga (2017-07-17). “Wikipedia Matters” (PDF). p. 22.
  2. Dang, Quang-Vinh; Ignat, Claudia-Lavinia (2017-08-23). An end-to-end learning solution for assessing the quality of Wikipedia articles. OpenSym 2017 – International Symposium on Open Collaboration. doi:10.1145/3125433.3125448.
  3. Tsikerdekis, Michail. “Cumulative Experience and Recent Behavior and their Relation to Content Quality on Wikipedia”. Interacting with Computers: 1–18. doi:10.1093/iwc/iwx010. Retrieved 2017-08-01. Closed access; author’s pre-print available.
  4. Tsuji, Keita (2017-05-26). Automatic Classification of Wikipedia Articles by Using Convolutional Neural Network (PDF). QQML 2017 – 9th International Conference on Qualitative and Quantitative Methods in Libraries.
  5. Heracleous, Loizos; Gößwein, Julia; Beaudette, Philippe (2017-06-09). “Open strategy-making at the Wikimedia Foundation: A dialogic perspective”. The Journal of Applied Behavioral Science. ISSN 0021-8863. doi:10.1177/0021886317712665. Closed access; author’s preprint available.
  6. Tinati, Ramine; Luczak-Roesch, Markus (2017). “Wikipedia: a complex social machine”. ACM SIGWEB Newsletter: 1–10. ISSN 1931-1745. Closed access.
  7. Dolmaya, Julie McDonough (2017-04-03). “Expanding the sum of all human knowledge: Wikipedia, translation and linguistic justice”. The Translator 23 (2): 143–157. ISSN 1355-6509. doi:10.1080/13556509.2017.1321519. Closed access.
  8. Youngwhan Lee; Heuiju Chun (2017-04-03). “Nation image and its dynamic changes in Wikipedia”. Asia Pacific Journal of Innovation and Entrepreneurship 11 (1): 38–49. ISSN 2071-1395. doi:10.1108/APJIE-04-2017-020. Retrieved 2017-08-01.
  9. Brendan Luyt (2017-05-25). “‘A wound that has been festering since 2007’: The Burma/Myanmar naming controversy and the problem of rarely challenged assumptions on Wikipedia”. Journal of Documentation 73 (4): 689–699. ISSN 0022-0418. doi:10.1108/JD-09-2016-0109. Closed access.
  10. Kwon, Okyu; Son, Woo-Sik; Jung, Woo-Sung (2016-11-01). “The double power law in human collaboration behavior: The case of Wikipedia”. Physica A: Statistical Mechanics and its Applications 461: 85–91. ISSN 0378-4371. doi:10.1016/j.physa.2016.05.010. Closed access.
  11. Martinelli, Luca (2016-03-02). “Wikidata: la soluzione wikimediana ai linked open data”. AIB studi 56 (1). ISSN 2239-6152.
  12. Ameen, Saleem; Chung, Hyunsuk; Han, Soyeon Caren; Kang, Byeong Ho (2016-12-05). Open-domain question answering framework using Wikipedia. In: Byeong Ho Kang, Quan Bai (eds.), AI 2016: Advances in Artificial Intelligence. Australasian Joint Conference on Artificial Intelligence. Lecture Notes in Computer Science. Springer International Publishing. pp. 623–635. ISBN 9783319501260. Closed access.
  13. Kennedy, Krista (2016). Textual curation: Authorship, agency, and technology in Wikipedia and Chambers’s Cyclopædia. The University of South Carolina Press. ISBN 978-1-61117-710-7. Closed access.
Supplementary references:

Published at Mon, 07 Aug 2017 02:28:58 +0000

“How to write about the entire world from scratch”: Britta Gustafson

“How to write about the entire world from scratch”: Britta Gustafson

Photo by Pax Ahimsa Gethen, CC BY-SA 4.0.

Many pioneering Wikipedians share the thought that, in its early days, way back in 2001, Wikipedia was a crazy idea. Wikipedian Magnus Manske, for example, told us that back then, the English Wikipedia “was a ghost town, with just about no content whatsoever.”

These early Wikipedians aimed to grow the online encyclopedia to 100,000 entries, roughly the size of the world’s largest print encyclopedias at that time. This lofty goal turned out to be attainable with only two years of hard work, and the English Wikipedia has since grown to nearly 5.5 million articles as of this writing. That former “ghost town” now hosts over 30,000 active contributors.

“I knew we were working on a sort of ridiculous project,” says Britta Gustafson, who joined Wikipedia in October 2001. “How do you write about the entire world from scratch? I certainly didn’t expect that it would grow so big and so serious, with a huge staff and a huge budget, with articles that are mostly pretty reliable.” She continues: “But even as a toy project, it was fun—I liked getting to write about things and then see other people improve my writing and correct my mistakes. I learned a lot about writing that way.”

Gustafson made her first edit because “information on a favorite topic was missing,” and has carried on editing Wikipedia for sixteen years to keep bridging knowledge gaps. Currently, she leads workshops to train beginner editors in person and spends countless hours online defending their contributions from deletion when community patrols mistake them for vandalism.

Starting to edit at the age of fourteen, Gustafson “grew up with Wikipedia,” she said. It was an eye-opening experience where she “enjoyed reading the recent changes and learning new things about the world … I remember when I could review all the recent changes for vandalism if I checked once a day.” As of this writing, the most recent 50 edits have been made during the last minute.

One of the principal reasons behind Gustafson’s continuous presence on Wikipedia for sixteen years was the welcoming community even though she was a young contributor. “I continued editing because I felt respected for my constructive contributions and treated as an equal by adults,” she explains. “Wikipedia taught me a lot about how to write and work with software and online communities.”

Gustafson edits on a wide range of topics, including software and website history; she also fixes minor issues in the articles she reads and uploads photos of buildings to Wikimedia Commons. But one of her interests stands out from the crowd: places that have witnessed mass murders.

Gustafson is particularly interested in editing about the location of a mass murder rather than the incident itself. “I started caring a lot about the impact of mass murders on communities because a place I love, Isla Vista, had a mass murder a few years ago,” she explains. “I was unhappy that this one event was what people would think of for a place with a lot of history and culture.”

Gustafson contributed to Isla Vista’s article on Wikipedia and started a local guide about it on another open-source website.

“A year later, there was the mass murder at the Mother Emanuel AME church in Charleston,” she recalls. “The church itself had only a stub article when I looked it up on Wikipedia after hearing about the murders—and overnight several other Wikipedians and I worked on this article. I helped expand it to tell the long and fascinating history of the church, because I didn’t want 200 years of history to be overwritten by one event. The next day, I saw journalists publishing articles about the history of the church on very short deadlines, and I suspect and hope they used our detailed Wikipedia article as background to help them find the interesting parts to write about and publish fast.”

Gustafson wanted to put her long history and experience on Wikipedia to use by sharing what she has learned with new editors. These days, she can be found at Wikipedia editing events in the San Francisco Bay Area as a volunteer organizer or a mentor for new editors.

“My intro talk isn’t sugar-coated,” she says. “It explains that working on Wikipedia means convincing other editors that your edits are legitimate, and that this isn’t always easy. I don’t think it’s helpful to attempt to get new underrepresented editors into the project by saying everything is fun and fair on Wikipedia—that’s misleading. It’s honest to explain both the joy and the frustration.”

New editors’ contributions are often reverted by Wikipedia editors who mistake them for vandalism, but this is not the case for those mentored by Gustafson. “Part of my work at events is to actively defend the articles the newcomers are building,” she explains. “Watchlisting the articles and reviewing the edits so that I can defend them against speedy deletions and any future deletion discussions. I also go through the articles after the event to fix up any newbie mistakes, to also protect against deletion attempts. If somebody starts arguing with one of my new editors, I step in like a 5000-pound gorilla and write talk page messages.”

Gustafson understands that not every participant in a one-day editing workshop will become a long-term contributor. However, she believes that training newbies is worth it for reasons that she explains to them during her workshops:

“Knowing how to edit Wikipedia means you can shape many people’s knowledge about a thing, because a huge number of people look to Wikipedia for background knowledge, including politicians, journalists, lawyers, government staff, businesspeople, and teachers.”

Samir Elsharbaty, Digital Content Intern
Wikimedia Foundation


Published at Mon, 07 Aug 2017 20:34:46 +0000

[Wikipedia] Battle of Bellevue

[Wikipedia] Battle of Bellevue

The Battle of Bellevue on 18 October 1870 was fought during the Franco-Prussian War and ended in a Prussian victory.
The French forces under Marshal François Achille Bazaine attempted to break through the lines of the Prussians investing Metz. They were unsuccessful and were driven back into the city with a loss of 1,193 soldiers and 64 officers. The Prussians lost 1,703 soldiers and 75 officers.

