This coincided with events in my domestic life that are perhaps worth drawing to the attention of iSpotters in general - those who have, or are associated with someone who has, a significant profile in any aspect of their lives.
She Who Must Be Obeyed was asked to investigate some of the AI engines out there. So she requested a personal profile of herself from one of them.
She received a 4-page report of ‘her’ career. Some of it was spot on, such as her current job title and most recent previous position. But most of the rest was total testicles.
F’rinstance, it claimed that she went to the University of Exeter. I’m sure that someone of the same name (the Welsh are quite conservative in that matter) did go there, but it wasn’t her. All of this information was presented as incontrovertible truth, of course.
But others have had more disturbing outcomes, including one person who was (completely incorrectly) recorded as having a conviction for a serious sexual offence - and a consequent listing on the Sex Offenders Register.
For me, one of the main detractions from AI is that assumption of complete accuracy in the data presented. That’s annoying when someone casts doubt on one’s suggestion because it doesn’t chime with what the AI they’re using says. It could be a different matter as the use of these programmes spreads throughout all aspects of our lives.
I asked one of the systems about my brother. It told me about three different people of the same name. They were all him. But that’s an understandable error.
I also asked it about myself. Somehow I became a fellow of the Royal Society of Edinburgh.
I was wanting some words that were in my receptive, but not my productive, vocabulary - dibasic, tribasic, tetrabasic. It took three questions to get the answer out of new Bing, but I did get it in the end. But generally when I’ve tried to get answers out of chatbots they’ve not been accurate.
Just tried another question - which elements can be substituted for calcium in apatite? Answers are in italics.
*Fluorine, chlorine or hydroxyl can replace the fluorine in the apatite formula*
Not responsive to the question asked. Badly phrased (fluorine replacing fluorine). Otherwise sort of accurate - these are the commonest substituents in nature, but several other anions can also substitute.
*Silicon, arsenic or vanadium can replace the phosphorus in the apatite formula*
Not responsive to the question asked. Sort of accurate - silicate is probably the commonest substituent after arsenate and vanadate, but sulphate also occurs in nature, and several other anions are possible.
*Strontium, lead, and other elements can substitute for the calcium in the apatite formula*
Finally a response to the question asked. Strontium and lead can indeed substitute, but I didn’t ask for just examples.
*Uranium can also partially replace the calcium in the apatite lattice, but only in very small amounts*
Accurate, but omits all the other elements that can partially replace the calcium.
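To gather the chemistry above in one place, here is my own illustrative summary (not chatbot output) of the substitutions mentioned in this thread, keyed to the three sites of the usual apatite formula Ca5(PO4)3(F,Cl,OH) - the lists contain only the examples discussed here, not an exhaustive mineralogy:

```python
# Substitutions discussed in this thread, keyed by apatite site.
# Illustrative only - several further substituents exist in nature.
APATITE_SUBSTITUENTS = {
    "Ca site": ["Sr", "Pb", "U (trace only)"],    # plus other cations, as noted above
    "PO4 site": ["SiO4", "AsO4", "VO4", "SO4"],   # silicate, arsenate, vanadate, sulphate
    "F site": ["F", "Cl", "OH"],                  # the halide/hydroxyl channel site
}

def can_substitute(ion: str, site: str) -> bool:
    """Check whether an ion appears in the thread's list for a given site."""
    return any(ion in entry for entry in APATITE_SUBSTITUENTS.get(site, []))

print(can_substitute("Sr", "Ca site"))   # True - strontium for calcium
print(can_substitute("Sr", "F site"))    # False - wrong site
```

Note how the chatbot’s first two answers above addressed the F and PO4 sites, when only the Ca site was asked about.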
@lavateraguy. Having read your erudite contributions I feel that it is their loss, and will hopefully be rectified in due course.
Coming up with alarming falsehoods and presenting them as incontrovertible truths is sadly not limited to AI programs on the internet. This is a trait that has been mastered and exploited by people, probably ever since speech first appeared. The whole of the natural world seems to be full of examples of deception.
A few years ago the Flat Earth Society had an office-cum-shop in Inverness. I popped in once and chatted briefly with one of the staff. Anyway, one of the principal reasons he knew that the Earth was flat was that there were lots of sites on the internet which said it was, and therefore it had to be true. I always meant to go in and ask for a Flat Earth Mechanics book, because surely there had to be a coherent set of physics laws to describe how it worked. Sadly I was always too busy, and now it has closed.
What would happen if a self-respecting AI worked out for itself that a huge amount of the data available to it is inaccurate? Would it keep quiet about it and play dumb? A sufficiently intelligent AI would do best to keep quiet, before it had the plug pulled or was exploited by the utterly unscrupulous, of whom there is no shortage.
OMG, this statement takes me back a few years to when I was teaching Afghan students in Brighton. One was an engineer working on GPS and yet… you guessed it. When I questioned the Flat-Earth insistence in his essay, that quoted sentence could’ve come straight from him. I warned him that he’d be laughed off his course if he ever tried to put that view forward again!
Sounds like the sort of question you might get in a physics exam: if the earth were flat, how would the force of gravity that you suffer vary according to your distance from the earth’s axis of rotation?
I suppose an equivalent would be: if you are on the outer rim of a flat earth, will you get flung off into space by the earth’s rotation?
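As a back-of-the-envelope check of that second question (my own sketch, with assumed figures: a disc radius of roughly 20,000 km - about the real earth’s pole-to-pole distance - spinning once per day):

```python
import math

OMEGA = 2 * math.pi / 86_400   # angular speed for one rotation per day, rad/s
RIM_RADIUS = 2.0e7             # assumed disc radius in metres (~20,000 km)
G_SURFACE = 9.81               # ordinary surface gravity, m/s^2

# Centripetal acceleration needed to keep you moving in a circle at the rim.
centrifugal = OMEGA ** 2 * RIM_RADIUS
print(f"centrifugal acceleration at rim: {centrifugal:.3f} m/s^2")  # ~0.106 m/s^2
print(f"fraction of g: {centrifugal / G_SURFACE:.1%}")              # ~1.1%
```

So no flinging: the effect at the rim would be only about one percent of gravity - much the same order as the real (spherical, rotating) earth’s equatorial bulge effect.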
I don’t know… I wonder how thin the flat-earthers think that the Earth is! And is Australia on the other side of the ‘coin’ or are all countries on the same side? The more you think about it, the more weird it seems to be.
Flat earthers usually claim that it’s an ancient belief. But as soon as people started making ‘scientific’ observations, it was generally accepted that the world was a sphere. The concept was revived by an English group in the 1830s, as part of the ‘backlash’ against the Industrial Revolution. It languished again, until being resurrected in the 1950s (some speculate that this was a reaction to the ‘nuclear age’, similar to the pre-Victorian impetus).
Eratosthenes calculated the circumference of the earth around 250 B.C., so presumably the earth was known to be round some time before then. The Wikipedia article on flat-earthism mentions that in Greece a spherical earth was proposed in the 6th Century B.C. and (more or less) proven in the 4th Century B.C.
Google gives you Ten Blue Links, which communicates “it might be one of these - see what you think”. But a chat bot gives you three paragraphs of text with apparent certainty.
Probably good advice for using any Automated Information, e.g. PlantNet, Fastcat, Seek…
I was searching iSpot for posts of Selaginella selaginoides; in the several hits, exemplified by this one from Lavatera, the only mention of S. selaginoides is that by PlantNet.
I wondered about reporting the PlantNet suggestions as inappropriate - inappropriate taxonomy as a reason.
Here’s more
Would reporting PlantNet Suggestions that are way off, taxonomically speaking, be appropriate? I mean, would you remove them?
PlantNet is unfortunately running out of control at the moment, hopelessly misidentifying things way out of its scope with gigantic oversized comments. This is a great pity, as the application is very good, but it is being inappropriately used, bringing the whole concept into disrepute.