The OpenScience Laboratory The Open University

iSpot Forum

Has anyone seen / used [email protected] Identify?

I agree that when the poster is given the id & little else it may not lead to learning.

However, when “we who know” offer some ideas in the comment or id box on why it is what it is, (or might be) then there’s an opportunity for learning.

To do this effectively we need to judge, from the poster’s activity tracker, what stage they are at; then we can offer appropriate help.

But that takes time… so we do what we can with the time that we have… responding at any level has to give us satisfaction too.

Edit. Just a bit later I came across this post https://www.ispotnature.org/communities/uk-and-ireland/view/observation/786414/duckweed-leaf-miner-walks-on-water
where the comment trail is, as bluebirdresearch says, “Fabulous”.


Link to probability… I can’t find it…

Yes, I’ve completed that course.
May I take issue with your comment “AI might really distinguish between two real species but it won’t be able to tell you how it has distinguished them”? Surely this is simply a feature of the particular implementation of the AI? It would be possible (perhaps even easy) for the system to provide a record of its decision-making process, even if “fuzzy logic” techniques are used. If an AI system can’t do this, I would doubt very much that it can be trusted; it may be, of course, that this isn’t implemented in the public-facing version but only in versions subjected to appropriate peer review.

Rule-based expert systems can tell you why they’ve come to particular conclusions. If I understand correctly, neural-net-based systems can’t - their behaviour is an emergent property of a set of weights. For example, AlphaZero can beat the best human players at Go, Chess and Shogi. It does this by having taught itself a better algorithm for evaluating board positions.
Even human experts can’t always tell you why they came to the conclusion that they did.
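To illustrate the contrast: a rule-based identifier can hand back the exact chain of rules that fired, something a trained network’s weight matrix can’t do. The sketch below is a toy - the two rules are loosely based on the spiny leaves and smooth achenes of Sonchus asper, but the function and its character names are invented for illustration, not taken from any real key or system.

```python
# Toy rule-based identifier (illustrative rules only, not a real key).
# Unlike a neural net, it can report exactly why it reached its conclusion.

def identify(specimen):
    """Return (species, trace), where trace lists the rules that fired."""
    trace = []
    if specimen.get("leaf_margin") == "spiny":
        trace.append("leaf margin spiny -> Sonchus asper candidate")
        if specimen.get("achene") == "smooth":
            trace.append("achene smooth -> confirms Sonchus asper")
            return "Sonchus asper", trace
    trace.append("spiny-leaf rules did not fire -> fall back to S. oleraceus")
    return "Sonchus oleraceus", trace

species, why = identify({"leaf_margin": "spiny", "achene": "smooth"})
print(species)          # Sonchus asper
for step in why:
    print(" -", step)   # the full decision trail, rule by rule
```

The `trace` list is the whole point: every conclusion comes with its reasoning attached, which is exactly what an emergent set of weights does not give you.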
The next milestone for AI might be protein folding.
One technique used to identify morphologically distinctive units is principal components analysis. An AI might end up doing something equivalent - using correlations of characters to identify units. For example, given a sufficiently large sample of DNA-identified specimens of Sonchus asper and Sonchus oleraceus, it might be able to identify the ambiguous specimens with superhuman accuracy and even identify the hybrids. But some degree of caution is necessary - the accuracy depends on the training set. For example, an AI trained on British Leontodon may not work when exposed to specimens from the continent.
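A minimal sketch of the idea, using synthetic data rather than real measurements: two groups whose individual characters overlap can still separate cleanly once principal components analysis combines the correlated characters into one axis.

```python
import numpy as np

# Synthetic example (invented numbers, not real specimens): two "species"
# measured on four correlated characters whose means differ only slightly.
rng = np.random.default_rng(0)
a = rng.normal(loc=[5.0, 2.0, 1.0, 0.5], scale=0.2, size=(50, 4))
b = rng.normal(loc=[5.6, 2.4, 1.3, 0.8], scale=0.2, size=(50, 4))
X = np.vstack([a, b])

# PCA by hand: centre the data, eigen-decompose the covariance matrix,
# and project onto the eigenvector with the largest eigenvalue (PC1).
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
pc1 = Xc @ eigvecs[:, -1]   # eigh sorts ascending, so [-1] is PC1

# The two groups sit at clearly different positions along PC1,
# even though no single character separates them on its own.
print(pc1[:50].mean(), pc1[50:].mean())
```

The same caveat from the post applies: the axis PCA finds is a property of this particular sample, so specimens drawn from a different population may not fall where the training set suggests.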
As a person who’s not keen on morphological species concepts, I don’t think that an AI can divide a set of individuals into species from first principles - on the one hand, morphs, castes, sexual dimorphism and metamorphosis could result in false positives; on the other, cryptic species in false negatives.


Exactly! There is an algorithm - a (self-)developing algorithm but an algorithm nevertheless. The system should therefore be able to describe both the methods it is applying (the algorithm) and the methods it uses to modify the algorithm.

But humans won’t be able to understand the particular bit (the part of the database it has created, plus code) that distinguishes the particular species. Getting the computer to explain this somehow might be a next stage.
The humans developed the AI and then let it run, but the last part is still missing: getting the computer to tell and explain exactly what it has used to separate the species. This is a general issue with AI doing this type of task; it is not unique to species identification.
I’m sure there will be much more on this over the next couple of years.

Another example from PlantNet Identify. (The first two offers were at least in the right order - and nearly led me astray by confirming my initial guess - but it then went into the weeds, albeit with low probability scores.)