iForum LIVE!: AI and iSpot: a spotlight on the PlantNet API

[quote=“miked, post:58, topic:1713”]
Seaweeds
[/quote]

I missed that comment and have responded here https://www.ispotnature.org/communities/uk-and-ireland/view/project/852902/

CHECK YOUR OWN (I have 63!)
Open EXPLORE COMMUNITY (Filter): Observations | Gallery | UK and Ireland | Page 1 | iSpot Nature, and add a few clicks.
Click ‘Filter community observations’.


@ajoly erratic?
https://www.ispotnature.org/communities/uk-and-ireland/view/observation/852843/ my comment
Is someone on the Development team READING this stuff?

I am intrigued by Automated Identification systems, and PlantNet especially (because it is French).

From comments here it seems that iSpot is hosting a version of PlantNet that is not the same as the versions people have been using elsewhere.

  • The iSpot version lacks the % likelihood feature; that is unfortunate.
  • It says of all its suggested IDs “(not in the ispot dictionary)”, which is untrue, as several I have checked are in it, albeit some are not in the UK dictionary. (On the other hand this may be seen as a useful deterrent, so posters will not try to add it as their ID, knowing it is not in the dictionary.)
  • It seems to take no note of location. While some plants may be arriving in the UK, for example because of climate change, it seems a bit premature to offer a tropical species in a UK outdoor setting.

Which raises the question: what is PlantNet offering us?

If the purpose is to try to improve AI identification of plants globally, then the trial here is not yet showing useful results as far as we can see, as several comments have noted.

And finally, the project website has this about us:

iSpot is a citizen observatory on biodiversity. The platform encompasses a network of over 68,000 global nature observers who have crowdsourced the identification of 30,000 taxa, through over 1,500,000 images of more than 750,000 observations of different species.

This, also from the website, is from an interview with Tom August, a computational ecologist at the UK Centre for Ecology & Hydrology (UKCEH) who works at the Biological Records Centre:

I think that Pl@ntNet navigates that interaction really well, both the Pl@ntNet app and the API. Those are, for me, the two key elements of a good AI in these spaces: that they are honest and tell you the various things it could be and their scores; and that they give you the chance to falsify.

I think that is what we iSpotters were expecting. I read the whole interview; I think it is close to what some of us hope Automated Identification is about.


JoC, most of the points you make were addressed during the session.

For example, it is up to us whether or not we want to show the percentage: initially it was shown, then we decided to switch it off, but we could switch it on again if that would be better.

As to location, again that could be taken into account, as Alexis said. This is more difficult, as it is often the garden plants that are the trickiest to identify and where the AI may be most helpful. I suggested a two-stage process: first try UK species, then global; but I am not sure whether this would be possible, as you would probably need to show both sets of results.
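
For what it's worth, a minimal Python sketch of that two-stage idea is below, purely as a sketch: the identify() helper, the flora names and the threshold are hypothetical placeholders, not the real Pl@ntNet or iSpot interfaces.

```python
# Sketch of the two-stage idea: query a UK/Ireland flora first and only fall
# back to the global flora if nothing scores convincingly. `identify` stands in
# for whatever call actually runs the AI (hypothetical, not the real Pl@ntNet
# or iSpot interface) and must return (species_name, score) pairs, best first.
def two_stage_identify(image_path, identify, threshold=0.5):
    local = identify(image_path, flora="uk-and-ireland")
    if local and local[0][1] >= threshold:
        return {"flora": "uk-and-ireland", "candidates": local}
    # Low local confidence, so widen the search. As noted above, in practice
    # you would probably want to show the user both sets of results.
    global_results = identify(image_path, flora="global")
    merged = sorted(local + global_results, key=lambda c: c[1], reverse=True)
    return {"flora": "uk-and-ireland + global", "candidates": merged}

# Dummy identify() just to show the shape of the inputs and outputs.
def fake_identify(image_path, flora):
    if flora == "uk-and-ireland":
        return [("Urtica dioica", 0.30)]
    return [("Urtica membranacea", 0.65)]

print(two_stage_identify("nettle.jpg", fake_identify))
```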

The bigger questions about exactly how it is used (does it give you a result immediately or after some time, can it be switched on/off, how does it affect the reputation system if you choose an ID after seeing the results, etc.) have not been resolved, as there seems to be a divergence of views and we want to see what people come up with.

Thanks Mike, it’s good to see our comments being taken on board.

One of the recent iSpot observations mentioned that a fungal AI system had been used to suggest a name. On the front page of that system (Danmarks officielle database for svampefund) it says “So please explore with curiosity and sanity”, along with various other things.

They were warning about using AI to identify fungi to eat, but I suspect the quote could equally apply to any use of an AI system.

I do use PlantNet in much the same way as you do. The investigation process, though, is pleasurable and hopefully makes things stick. 🙂

Re your comments on the PlantNet panel’s obtrusiveness, percentages etc.: agree entirely; I think I said that myself some time ago.

I can’t see the point of carrying on the engagement with this topic when the site is barely functioning at present.

Edited some time later: should have said “my” engagement instead of “the”.

That sounds v negative I know, but that’s what innumerable 403 forbiddens do!

I, too, am getting 403 for everything I click on from the Changes Tracker. I reckon a hostile power has hacked iSpot as part of their campaign to undermine Western civilization. (Just a conspiracy theory!)


I enjoyed my quick test @miked


I’m thinking that we need to raise spirits and hopes a little. The quality of recent posting is good.
And we still need to show our sense of humour, treat it all quite lightly and impress everyone with our attention to detail and critical appraisal (in observations).

Some constraints on the capabilities of plant identification apps.

I reckon that there are around 500,000 living species of vascular plants, possibly appreciably more; around 350,000 species are known to science. Pl@ntNet’s global theme has just shy of 35,000 species, so clearly it will be unable to identify 90% of plant species, though it still has a potential breadth an order of magnitude greater than my own (my life list for wild plants is about 1,500 species, and perhaps the various garden plants I can recognise that I haven’t seen wild might bring my capability up to, say, 3,500, though I may be flattering myself).

However, Pl@ntNet’s Western Europe theme has 9,000 species, which is well in excess of the size of the flora of any single Western European nation and may be approaching completeness for the combined Western European flora, so potentially it could perform well on wild plants from Western Europe, including Britain and Ireland. It is possible, though, that the Western Europe theme is biased towards France (Pl@ntNet is based in France), and this may affect the results.

To identify plants I use not only morphological features that can be seen in photographs, but also size, habitat, range, phenology and, with caution, smell, taste and touch. That gives me an inbuilt advantage over Pl@ntNet, which currently uses habitat and range fairly crudely through its themes and doesn’t use the others. Phenology could be added, using date information from photograph EXIF fields, though the interaction of range and phenology might require large training sets: flowering times can be quite different between southern Spain and northern Scotland.
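
As an illustration of where that date information would come from, here is a small Python sketch that reads the capture date from a photo’s EXIF data using Pillow; the tag numbers are the standard EXIF ones, but how the date would then feed into a phenology-aware identifier is left entirely open.

```python
# Sketch: read a photo's capture date from its EXIF data (Pillow assumed
# installed). Tag 306 is DateTime in the main IFD; tag 36867 is
# DateTimeOriginal in the Exif sub-IFD (0x8769).
from datetime import datetime
from PIL import Image

def capture_date(path):
    exif = Image.open(path).getexif()
    raw = exif.get(306) or exif.get_ifd(0x8769).get(36867)
    return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S") if raw else None

when = capture_date("observation.jpg")
if when:
    print("Photo taken in month", when.month)  # e.g. evidence about flowering time
```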

Using other features would require manual user input, which would take the system closer to the multi-access key apps that also exist (generally for smaller taxonomic groups). This is probably incompatible with ease of use for the general public.

Some of the results suggest that Pl@ntNet works on pattern matching and lacks any semantic understanding of what it is looking at. For example, a submerged aquatic plant was among the offers for the floating leaves of a water-lily, i.e. it doesn’t appear to distinguish submerged, floating and aerial foliage. This could possibly be addressed by including suitable tags in the training sets.
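
As a toy illustration of what such tags might look like: each training image carries a foliage-context label, and candidate taxa whose context disagrees with the query are filtered out. The record structure and filter below are entirely hypothetical, not Pl@ntNet’s actual training format.

```python
# Hypothetical context tags on training images: a label such as "floating",
# "submerged" or "aerial" foliage could let a matcher discard candidates that
# make no semantic sense for the query, e.g. a submerged plant offered for
# floating water-lily leaves.
training_images = [
    {"file": "nymphaea_alba_01.jpg", "taxon": "Nymphaea alba", "foliage": "floating"},
    {"file": "elodea_canadensis_01.jpg", "taxon": "Elodea canadensis", "foliage": "submerged"},
    {"file": "quercus_robur_01.jpg", "taxon": "Quercus robur", "foliage": "aerial"},
]

def plausible_candidates(query_foliage, candidates):
    """Keep only candidate taxa whose training images share the query's foliage context."""
    allowed = {r["taxon"] for r in training_images if r["foliage"] == query_foliage}
    return [c for c in candidates if c in allowed]

print(plausible_candidates("floating", ["Nymphaea alba", "Elodea canadensis"]))
# -> ['Nymphaea alba']
```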

Pl@ntNet’s initial training was based on image sets provided by experts, which are likely to be good photographs of plants. On iSpot we have the S295 participants, and other newbies, who don’t know what makes a good photograph for identification and may, for example, just show the crown shape of a tree. We have experienced field botanists who put up atypical specimens of taxa, either to show the variation (like when I added an observation of Urtica urens seedlings) or because they are uncertain about the identification. We have the occasional introduction of “artistic”/“aesthetic” images. And we have images where the plant of interest is surrounded by other plants. All of these are likely to give Pl@ntNet problems.

Pl@ntNet’s training set also includes user-provided images. This can cause problems where the taxon is misidentified, such as the Fucus labelled as Posidonia that Derek found recently. It will also include some lower-quality images, though whether that is good or bad is not a question to which I have an answer: on the one hand it obscures the gaps between taxa, but on the other hand it trains the AI on data more like what it is likely to be offered by an average user.


One other item to consider is that AI systems perform better than humans in an increasing number of areas of science where pattern matching is involved, e.g. detecting certain types of cells, but I always question where the limit will be, given the number of species and the limited number of pixels the system is able to consider.

Dear all

Again, our sincere apologies for the late changes to this second session, which prevented many of you from participating live. We are reviewing additional posts and working on further responses.

AI and iSpot: a spotlight on FASTCAT-Cloud
We really do need your input: giving us feedback and raising questions while testing the PlantNet API as well as FASTCAT-Cloud in iSpot.

Please join us for the next scheduled LIVE discussion with the FASTCAT-Cloud development team and iSpot Admin on Wednesday, October 19th, 5:30 p.m. BST / 6:30 p.m. CEST (2022-10-19T16:30:00Z).

This iForum LIVE session will be hosted slightly differently to the earlier sessions. We will be facilitating discussions simultaneously via LIVE chat in the iSpot Forum as well as via a LIVE video meeting.

Joining details: AI and iSpot: a spotlight on FASTCAT-Cloud

After registering, you will receive a confirmation email containing information about joining the meeting on Zoom. *Please note: turning your camera on and speaking in the Zoom meeting is entirely optional; you can watch and listen on Zoom and add your comments to the iSpot Forum chat.

We really hope you can participate and look forward to chatting with you in the session on October 19th!

Janice

I’ve just fed an ambiguous live/dead nettle (youngish growth with not enough resolution to see the stinging hairs) into Pl@ntNet. Global offers a variety of nettle species; Western Europe finishes off with Lamium album at 0.18%, but of the images offered for Lamium album some are clearly Urtica species, and none of them is obviously Lamium album.

I suspect that Pl@ntNet has picked the most similar images, rather than the most certain images, and there’s a larger selection of images of good Lamium album, but I’m still led to wonder whether the accuracy of Pl@ntNet’s training set is lower than I had thought.
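
For anyone who wants to poke at the scores directly rather than through iSpot, a query against Pl@ntNet’s public identification API looks roughly like the Python sketch below. The endpoint, the “weurope” project name and the response fields follow my reading of the public v2 documentation, so treat them as assumptions; the API key is a placeholder, and the iSpot integration may well call the service differently.

```python
# Rough sketch of a direct query to Pl@ntNet's public identification API.
# Endpoint, project name and response fields are as I understand the public
# v2 documentation; the API key is a placeholder.
import requests

API_KEY = "YOUR-PLANTNET-API-KEY"  # placeholder
url = f"https://my-api.plantnet.org/v2/identify/weurope?api-key={API_KEY}"

with open("nettle.jpg", "rb") as img:
    response = requests.post(url, files=[("images", img)], data={"organs": ["leaf"]})
response.raise_for_status()

# Each result carries a score (0..1) that a front end could show as a percentage.
for result in response.json().get("results", [])[:5]:
    name = result["species"]["scientificNameWithoutAuthor"]
    print(f"{name}: {result['score'] * 100:.2f}%")
```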

I’ve looked for a “report image as incorrect” button, but failed to spot one.

The latest advance from Google DeepMind’s AlphaTensor is finding better matrix multiplication algorithms for various sizes of matrix than were previously known. I’m surprised to see that particular problem succumbing to pattern matching.
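
For context on why that result is striking: even the classical case is subtle. Multiplying two 2×2 matrices naively needs 8 multiplications, while Strassen’s 1969 scheme needs only 7, and AlphaTensor searched for decompositions of that kind at other sizes. The Python sketch below checks Strassen’s classic scheme (not AlphaTensor’s new ones), just to show what “a better algorithm” means here.

```python
# Strassen's 2x2 scheme: 7 scalar multiplications instead of the naive 8.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
naive = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert strassen_2x2(A, B) == naive  # both give [[19, 22], [43, 50]]
```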

Pattern matching can still go badly astray, as when the military accidentally trained an AI to recognise sunny clearings when they were trying to train it to identify tanks.

With taxa identified by cryptic characters, and overlapping gross characters, there’s a risk of seizing on statistical noise in the gross characters rather than picking out the reliable ones. (Humans are vulnerable to that as well, or at least I worry that I might be falling into that trap.)

From an evolutionary/ecological point of view it is interesting that the AI is fooled by the real vs false nettle; I am sure there are papers in that. It is especially interesting that it got it the wrong way round, i.e. mistook the stinging version for the inert one.

It’s not clear whether it was fooled, or whether there were errors in the training set.

So are you saying that the AI got it wrong because the humans had got the wrong ID in the training set? From your programming background, are you on the side of the computers here, saying the humans gave them bad data so they gave bad results?
Whatever the reason, I think it is interesting, and it is cases such as this that may reveal a lot about the system and its limits.

Hi, I’ve just quickly read through this thread and wanted to check something, please. I will caveat what I’m about to say with the fact that my IT skills aren’t brilliant, so I may have missed something obvious. Firstly, it doesn’t seem to alert you that PlantNet has suggested an ID for your observations. I’ve only known that it has because Dejay has let me know via a comment, so I’ve seen that in my changes tracker. As such, I have probably missed some. Could it be flagged as a comment in the changes tracker at all, to prompt us to look? Apologies if that has been discussed previously and it’s not possible. The other thing I just wanted clarity on, please, is: if I do see them, how do I provide feedback? Is it in the comments section of my own post? Also, is the feedback just about how useful it has been or not?

Pl@ntNet gave a very low probability to Lamium album, and showed what were supposed to be six exemplars of that species, which from their appearance were intended to be the subset of Lamium album images closest to the query image. There were no Lamium album flowers or fruits, but some of the images had obvious Urtica inflorescences/infructescences.

There are two ways these images could have got into the dataset. Either they were part of the initial training set, and whatever action was taken to ensure that the images in the initial training were correctly identified was insufficient; or the AI misidentified the images when presented with them in a query, and the user both accepted the AI’s identification and contributed the images to Pl@ntNet.