Quizzes (oddities)

I tried a few bird quizzes last night.

I was surprised to find that even in the easy ones I was expected to identify a number of non-UK species. Perhaps the easier quiz levels should be restricted to a community selected by the user.

As others have noted, some of the quiz photos are missing, and others are of poor quality. Are the photos missing in the original posts, or are they just not shown in the quiz? It would be a substantial exercise, but a productive one in the end, to use only photos that the team has identified as suitable for quizzes.

Just to see what they were like, I tried the expert level a couple of times. On each occasion I was presented with the same photograph for every question, although the answer was different each time. Probably a problem for the programmers to sort out.

The expert-level plant quizzes are behaving the same way: the image for the first question is repeated on subsequent questions, even though the answers are different.

(The first question was reasonable for expert level - pinnules of Athyrium filix-femina, which I was able to identify from the sorus shape and distribution.)

I think the histogram you are shown at the end is the same regardless of the level.

An advanced quiz included an ‘identify a non-existent image’ question. The multiple choices were easier than in the intermediate quiz - the alternatives were not phylogenetically restricted, which allows easy wins such as identifying the one moss among three flowering plants.

Plants: level 1 (beginner). All went well (I didn’t think they were ‘easy’, but I could do them) - until this question, where we have one missing image and a plant that I have never encountered (I suspect it’s S African):

Further along, I got to:

I don’t think any of these are ‘beginners’ plants, and I could only do it because I have a vague idea of what the other three look like.

I suspect part of the problem is that non-UK flora is creeping into the quiz, but that isn’t the only issue: I don’t think a beginner is going to be able to tell Solidago canadensis from S. gigantea - many books don’t even include the latter.

Despite the issues, I thought I was going to get 10/10 (which I would hope to manage at beginner level) - but I fell at the final hurdle and had to use a lifeline for


—which (you guessed it) is a non-native alpine.

I think Lavatera Guy is right - the scoring doesn’t work properly. Aren’t you supposed to get a half point if you use a lifeline and end up at a correct answer?

If you look at the Quiz tab at Your iSpot, it has the correct score, but the scores are rounded down on the quiz finish page. (And I suspect that the histogram is based on fake data rather than real data.)

The multiple choice questions are generally easy for experts and near-experts (they’re effectively ‘identify the family’ questions). For your first screen grab I assume that the blank image was the correct answer, as none of the others is, as far as I can tell from the thumbnails, a Solanum (though I can’t identify any of them from the thumbnails; is one Rhododendron luteum?). For your second screen grab, I have the advantage of knowing that Dorycnium is a legume genus.

A problem is that the iSpot plants category covers three different field-naturalist communities - vascular plants, mosses and algae - and cross-community questions are not easy, except that most people have some knowledge of the flowering-plant subset of vascular plants.

With your last question the plant can be identified as a silenoid, but I’m not sure I could distinguish it as Silene rather than Dianthus from that photograph. How do you exploit the hint? After some thought I came up with querying the Euro+Med database. (Alternatively, Wikipedia sometimes gives lists of species for genera.)

I got a Solidago gigantea to identify. It was on the edge of identifiability from the photograph. (In the field I use touch - leaf hairiness - and optical aid - upper stem indumentum - to distinguish the two species.) I agree that it’s too much to ask a beginner to do.

I strongly suspect that the quizzes aren’t geographically restricted. Some overseas images could sneak in because observations have historically ended up in the wrong community, and others have badly misplaced locations, but there are too many non-British plants turning up for that to be the whole explanation.

Setting up a quiz without domain knowledge (e.g. which groups are critical and which are easy) is difficult, but the old iSpot made a decent stab at it. The new one looks as if it’s trying to produce the questions using simple mechanical rules, such as ‘pick 4 taxa from the group’ (‘pick 4 taxa from one order’ on the intermediate quizzes).

The quizzes say “no cheating”. But what is cheating?

Are you allowed to use field guides? When the lifeline is the first letter and the number of letters for the genus and the epithet, are you allowed to look up a list of species to find a matching name, or is it only meant to jog your memory? When you’ve got an image and 4 names, are you allowed to do an image search on the names? Ditto with a name and 4 images.

The only thing which is clearly cheating is using Google to find iSpot pages displaying the images.

One could imagine open-book versus closed-book options on the quizzes, with separate scoring.

Easy mammal quizzes are actually approaching easy. I dropped half a point because I had to identify a grey squirrel from a blank image (grey squirrels are one of the few mammals I can actually identify), and had to make a random guess between two blank images on another question.

If they fixed the blank images it should be possible to score 8 regularly on these. (The other two questions are liable to throw up whales or bats or other difficult-to-identify species.)

Addendum: Twice in one quiz an image for Homo sapiens was used.
This was a suspiciously circular small pool in what appeared to be saltmarsh developed over a stony substrate. I’m quite willing to believe that the pool was anthropogenic, but it’s perhaps not quite suitable for an easy quiz. And this second quiz raised other issues. I had been thinking that spoor and spraint images might turn up, even though they’re not really suitable for an easy quiz, and sure enough one did; I managed to identify it as an artiodactyl footprint, which was sufficient. There were also two questions with two different bat species, and a high-arctic seal (Erignathus barbatus) to identify.

Have you ‘heard’ enough, Mike?
Out of sheer devilment I tried the hardest levels.
I passed on the first question (each time), but then the next ten questions all showed the same image, each with a different answer, no matter what I tried, including the correct answer. I couldn’t be bothered to take ten screenshots of the same thing, so what about these?


Believe me yet?
And should a badger be the lead question in an Advanced mammal test?

I am now a dab hand at making the quizzes angry.
Here’s one for Jo:

I think, @miked, you might consider withdrawing the quizzes, just until they are, well, better maybe?
To be fair, it is very close to being complex and excellent programming. BUT, you know, I have to ask: why all this effort while we still cannot edit comments - just one small satisfying step? Please?
And I’d like to see the return of the Community Stars (Social score?) - there are plenty of very experienced posters without high Icon scores, and one or two greatly valued people with no Icons at all.

Summary of problems

  1. Expert quizzes are broken - they use the image from the first question for all 10 questions.
  2. Blank images are used. (Suspected cause - an observation is selected randomly, without checking that there is an image - though blank images seem rather common for that to be the whole story. See the sketch after this list.)
  3. The score is rounded down on the finish page.
  4. The histogram on the finish page doesn’t represent real data. (It might be the total results from the old iSpot, prior to having a decent sample from the new one, but in that case it should be broken down by grade and group.)

And there are a number of points regarding the difficulty of questions, including

  1. the questions are not geographically restricted, which should be the case at the lower grades at least.

Hint exploiting: it gives you an ‘S’ for the genus and a ‘q’ for the epithet. S for Silene is a reasonable guess - so if you type that into the Scientific Name field and follow it with a q, you find there is only one name that comes up automatically! Cheating a bit, though.

Another problem, or several: I selected an easy plants quiz and ticked the ‘personalised quiz’ button.

Most of the observations in the resulting quiz were birds or invertebrates (hardly related to my use of iSpot). It looks as if the “wrong answers” are taken from my use of iSpot, as I got several questions with 3 plants and a correct answer.

But the definite problem is that it used the plants dictionary, so you couldn’t enter a correct name for the ‘enter a name’ questions (which were a dragonfly and a microlepidopteran).

Thanks for the messages on the quiz; this sounds like yet another problem. I have no idea how the programmers could get it so wrong - possibly because the person who produced the original version was from a department that writes this kind of question all the time, while the people who rewrote it were general programmers. It also looks as if it was not properly tested before being released. As I mentioned earlier, it was not possible for us to test it before release (due to the limitations of the test site, e.g. no images in the quiz on the test site), but I assume the programmers themselves should have been able to test it.

It certainly looks as if the reimplementation of the quizzes suffered from the lack of input by domain experts. What I’m mildly puzzled about is why the work put into the previous version of the quizzes got lost; changing the underlying platform can require significant amounts of reimplementation, but design features ought to have been carried forwards.

I do recognise that creating quizzes that are not manually curated is a challenging problem. I can see where the process has failed, but inventing a workable algorithm that does the job is far from trivial. (The simplest rule I can see is to restrict eligible observations by distance and taxa by relative frequency, with the distance threshold increasing and the frequency threshold decreasing, as you go up through the grades. Add in a minimum taxonomic distance for the alternatives in the multiple choice questions, and you might have something workable. Restriction by community alone doesn’t work because of the global community, but adding that as a further constraint on eligibility stops people from Kent being baffled by non-British species occurring in Flanders and Picardy.)

I particularly liked the whale dressed up in a badger costume. You can tell it’s a whale underneath, though - and it still looks a bit damp.

Yes, me too. Though it’s easy to spot. But I can’t tell one plant from another


I have a lot of wishes for 2018.


Don’t worry - we all find Sphagnum difficult 🙂

I’ve only just come across this thread, and what fun it has been. I haven’t tried a quiz myself, but the opportunities for getting a good laugh might tempt me. Though, of course, there is actually a good reason to remove them until the programming is sorted. And, as dejay said, the return of the ‘edit comment’ facility would be a worthwhile action if it is not too difficult a patch.

OK, so I couldn’t resist a level 1 plants quiz.
I was asked to identify
A Californian sea lion,
A Californian salamander,
A snake-necked turtle (which was the species highlighted),
A fantail warbler,
A South American bird, a Sierra Finch,
A British marine fish, a rockling.
I got 4 plant questions, one of which wasn’t what it said it was.
I submitted some ‘Dispute this answer’ responses.
Overall a good laugh, but hardly a level 1 PLANT quiz.

I take it that you ticked the ‘customise quiz’ button. If you don’t tick that, you get (or at least used to get) a halfway sensible quiz.

ENOUGH, @miked?