ZooCon 2015 (Part 1/2)

That’s right. I have finally managed to shackle the first half of my ZooCon thoughts to the metaphorical mast before they fly away. Hold on tight now for the first of two posts. But first I should admit that I attribute a percentage of my excitement about ZooCon to the fact that it took place on Keble Road, Oxford. Yes, Keble Road. The road named after, and adjoining, the college at which I studied as an undergraduate. I don’t need any excuses to return to Keble but, since ZooCon gave me one on a plate, I instantly booked to stay there the night before. I tell you this only because you cannot even begin to imagine my face, a cocktail of disbelief and glee, when the girl on reception presented me with the key to the very same room that I lived in for two years. What a way to start my ZooCon weekend. A little backwards time travel before I was catapulted into the future the next day.

Becky Smethurst kicked us off with a talk of two halves. First she reported on a live Citizen Science project from earlier this year on the BBC’s Stargazing Live (i.e. classification against the clock!) to find supernovae. From what I understand, when certain stars explode as supernovae, for a very brief period they shine at the same intrinsic peak brightness before fading; the apparent brightness of a supernova, as a proportion of that known peak, therefore indicates how far away it is, thus how much the universe has expanded, and thus the age of the universe. Capturing how bright a supernova is at its birth requires action on a large scale in a limited period of time. For three consecutive nights, Brian Cox asked viewers of Stargazing Live to trawl through 100,000 images taken by the SkyMapper telescope in Australia and classify them before the sun set in Chile, where a telescope would be turned to focus on any candidates found. More than 40,000 volunteers provided almost 2 million classifications for the images over the three days to find four, ultimately confirmed, supernovae, one of which was a Type Ia, the kind scientists use to age the universe. I was impressed enough that volunteers found one new supernova, but even more impressive is that this one data point yielded an age of the universe of 14.03 billion years, within 200 million years of the accepted answer! That, my friends, is the power of the crowd. Not just the power of Brian Cox. It also raised the question of how we communicate the science that results from classifications and how we can improve this. The iMars team is already concerned about how we will manage and give feedback to volunteers, for example how we are going to administer discussion board posts.

Second, she told a story about how she had taken ten photos of Orion’s Belt on a windy evening in Hawaii (a word that always grabs the attention of my ears), each ten seconds apart. She then used open source software to stack the photos and produce one image showing the movement of the sky in that time. BBC viewers used the same process and sent in 790 photos which, stacked together, produced the image in Figure 1.

Figure 1: Orion, as captured by BBC Stargazing Live viewers.

I don’t know about you, but this image stole my breath. You can read all the details and download it for yourself here.
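
She didn’t say exactly which software was used (and I haven’t checked), but the stacking step itself is simple enough to sketch at home. Here is a minimal, hypothetical version using numpy and Pillow; the filenames are my own invention and the frames are assumed to share the same dimensions:

```python
import glob

import numpy as np
from PIL import Image

# Load the exposures (hypothetical filenames) as float arrays,
# assuming every frame has the same dimensions.
frames = np.stack([
    np.asarray(Image.open(path), dtype=np.float32)
    for path in sorted(glob.glob("orion_*.jpg"))
])

# A maximum-value stack keeps the brightest value seen at each pixel,
# so stars trace visible trails as the sky rotates between frames.
trails = frames.max(axis=0)

# A mean stack instead averages the frames, which suppresses noise.
denoised = frames.mean(axis=0)

Image.fromarray(trails.astype(np.uint8)).save("orion_trails.jpg")
```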

Next up, Alexandra Swanson talked about Snapshot Serengeti, a personal favourite of mine, for which volunteers identified animals in 1.2 million images taken by 225 camera traps across 1,125 km² of the Serengeti National Park in Tanzania, to improve our understanding of the migration and interaction of its 1.6 million wildebeest and zebra. The project recently published its first paper, which received an unusually high amount of publicity; this is atypical of Citizen Science projects, which tend to receive more publicity for their launch than for their results. Their paper swam against that tide to report what happened to the 10.8 million classifications contributed by over 28,000 registered and around 40,000 unregistered volunteers. It was fascinating to hear how she had used the project’s data to explore the certainty of individual classifications and, more specifically, how many classifications were needed to be confident that the consensus for an image was correct for different species. Because I’m a nerd, and we have similar decisions to make on our project, I was interested to note their criteria for retiring images from circulation (I’ve sketched these in code after the list):

  1. five consecutive “nothing here” classifications;
  2. ten non-consecutive “nothing here” classifications;
  3. ten matching (consecutive or non-consecutive) classifications of species or species combination;
  4. 25 total classifications (whether in agreement or not).
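
Here is that retirement logic expressed as a rough Python sketch; this is my own reconstruction from the list above, not the project’s actual code:

```python
from collections import Counter


def should_retire(classifications: list[str]) -> bool:
    """Apply Snapshot Serengeti's retirement criteria, as I read them.

    `classifications` is the ordered list of answers for one image,
    where "nothing here" marks a blank. My own reconstruction, not
    the project's actual implementation.
    """
    blanks = [c == "nothing here" for c in classifications]
    # 1. Five consecutive "nothing here" classifications.
    if any(all(blanks[i:i + 5]) for i in range(len(blanks) - 4)):
        return True
    # 2. Ten "nothing here" classifications in total (non-consecutive).
    if sum(blanks) >= 10:
        return True
    # 3. Ten matching classifications of a species or species combination.
    species = Counter(c for c in classifications if c != "nothing here")
    if species and species.most_common(1)[0][1] >= 10:
        return True
    # 4. Twenty-five classifications in total, whether in agreement or not.
    return len(classifications) >= 25
```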

For each image, the team applied a plurality algorithm to produce a “consensus dataset”. Next, for each species, the team calculated the level of agreement between classifications, or “evenness score”: a score of 0 indicated total agreement (i.e. a species that was relatively easy to identify) and a score of 1 indicated total disagreement between classifications (i.e. a species that was relatively difficult to identify). Finally, five experts produced 4,428 classifications from a subset of 4,149 images, which could be compared against the combined intelligence of the crowd. Remarkably, 97% of the volunteers’ species classifications, and 90% of their species counts, matched those of the experts.

This data also allowed them to delve deeper into the 3% that were incorrect and find two main types of error: 1) false negatives, where volunteers missed an animal that was there, and 2) false positives, the flip side of the same coin, where volunteers classified something that wasn’t there (relatively common for rare species, such as rhinos, that people get excited about). The analysis found that species with a high rate of one type of error tended to have a low rate of the other. The interesting implication of this is the potential for the dynamic retirement of images, to help projects manage their increasing volume: some images might be considered correctly classified after only three matching classifications, whereas images of species that are more difficult to identify may require more, i.e. how many pairs of eyes do we need to look at an image? As an extension of this, researchers may also require different levels of classification accuracy, depending on the question they are using the data to answer. It’s worth pointing out that Greg Hines has also explored the weighting of classifications, not only by species but by individual volunteer accuracy, and has quantified the increase in certainty obtained with each additional classification. Figure 2 shows the graph from this paper illustrating that rarer species require more eyes for the same level of certainty.

Figure 2: The number of users required to classify a photo for an accurate result increases according to the level of disagreement between their classifications (Hines et al., 2015).
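
As a fellow-nerd aside, the plurality-and-evenness idea is easy to sketch. Assuming a Pielou-style evenness index (normalised Shannon entropy), which is what I understand the paper to use, it might look something like this; the function name and example votes are mine:

```python
import math
from collections import Counter


def plurality_and_evenness(labels: list[str]) -> tuple[str, float]:
    """Return the plurality answer and an evenness score in [0, 1].

    0 means every volunteer agreed; 1 means the votes were spread
    evenly across all the species named. An illustrative sketch,
    not the paper's exact pipeline.
    """
    counts = Counter(labels)
    consensus = counts.most_common(1)[0][0]
    n, s = len(labels), len(counts)
    if s == 1:
        return consensus, 0.0  # total agreement
    # Shannon entropy of the vote shares, normalised by its maximum ln(s).
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return consensus, entropy / math.log(s)


votes = ["wildebeest"] * 8 + ["zebra"] * 2
print(plurality_and_evenness(votes))  # ('wildebeest', ~0.72)
```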

Her talk sparked a conversation in the ensuing break between me and my project mate about how we will measure performance in our project and what we can use to assess the accuracy of classifications. As I said in my last post, the team at UCL’s Mullard Space Science Laboratory is developing an algorithm at the same time as we are developing the Citizen Science platform for iMars. Because both techniques are new, we start with the hypothesis that the algorithm will successfully filter for images that show geological change, so that volunteers can do the more fun and interesting job of defining the change, which is more challenging to code into the algorithm. How we will determine the successful performance of either, however, is still to be determined itself!

After the break, Victoria Van Hyning presented the wide range of Humanities projects the Zooniverse team is working on. The main challenge her work addresses is that computers cannot read handwriting, and there aren’t enough experts in the world to create the data needed to train them and garner such rich insights into the world before print. She gave us a tour of her current projects and future plans.

  • Operation War Diary: 1.5 million pages of war diaries, which detail daily events on the front line, the decisions that were made and activities that resulted from them. The key learning of this project was not to make the tasks too complex.
  • Science Gossip: a collaboration between ConSciCom (an Arts and Humanities Research Council project investigating the role of naturalists and ‘amateur’ science enthusiasts in the making and communication of science, both in the Victorian period and today) and the Missouri Botanical Garden, which provides material from the Biodiversity Heritage Library, a digital catalogue of millions of pages of printed text from the 1400s to today relating to the investigation of the natural world. Since March 2015, 4,413 volunteers have tagged illustrations and added artist and engraver information to 92,660 individual pages through this website; this will help historians identify why, how often, and by whom images depicting a whole range of natural sciences were made in the Victorian period. The project also has the potential to enable graphical search functionality for these catalogues.
  • Ancient Lives: over 1.5 million transcriptions of the Oxyrhynchus Papyri. The neatest thing on this project is the Greek character keyboard (Figure 3), with which users can become proficient without any prior expertise.
Figure 3: The Greek Character keyboard of the Ancient Lives project.

Future projects include:

  • Diagnosis London: a future project with the Wellcome Trust and the University of Leicester, which will again return to the Citizen Science of the 19th century, this time through public health records.
  • Anno.Tate: a project with Tate UK to transcribe artists’ notebooks, diaries etc.
  • Shakespeare’s World: in collaboration with the Folger Shakespeare Library in Washington DC, seeks volunteers to transcribe records of life when Shakespeare was alive.
  • New Bedford Whaling Logs: a spin-off from Old Weather that targets climate data and social history.

She is excited that, over the next 12 months, the new web-based Zooniverse platform, Panoptes, is going to be adapted for the Humanities to enable a granular approach to classification. This will facilitate different levels of participation (so that volunteers can dip in and out of the same document without having to abandon it completely), mitigation of fatigue, and construction of algorithms. Volunteers will also be able to create what she called crib sheets of letters and/or words as their own personal reference, which has the potential to improve their consistency and proficiency, and to combine the crowd’s knowledge to alleviate common misclassifications. Linguists are interested in taking this data further to examine the evolution of spelling and punctuation, which is often inconsistent. Audio and visual classifications are also something the Humanities arena might explore, with applications in domains such as archaeology.

The second half of the day was no less interesting, so I intend to write it up separately in due course. (I haven’t got to the penguins yet!) Until then, I would love to hear your thoughts, or recommendations for reading, on any of the issues I have highlighted in bold. They all represent elephants that have been stamping around in my head since I started out on this project, so I was very grateful to hear them aired at ZooCon, and I intend to remain mindful of them as my project progresses.
