A Big Read item from the FT, by Stephen M Fleming, April 16 2021: https://www.ft.com/content/1ff66eb9-166f-4082-958f-debe84e92e9e
This is an excellent article, and the work on self-examining systems is fascinating. I look forward to hearing more about that.
I find the “confidence” ascribed to AIs is often misunderstood by potential consumers of AI/intelligent systems. The software itself is neither ‘confident’ nor ‘uncertain’; it is simply returning a result. The measure of confidence, as in an image being of a dolphin or not, is not the software’s certainty; rather, it is an assessment of the validity of the algorithm given the parameters and structure provided in that case. Software can say that, given certain observations, specific relationships can be inferred: for example, a particular arrangement of pixel values in an image should be associated with the label ‘dolphin’. The human user is the one who has confidence, or not, in the approach.
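To make that concrete, here is a minimal Python sketch (the labels and raw scores are invented for illustration) of where such a “confidence” figure typically comes from in a classifier: a softmax simply normalises the model’s raw scores so they sum to one, and the largest normalised value is what gets reported as confidence.

```python
import math

def softmax(logits):
    """Normalise raw scores so they sum to 1; the largest of
    these is what gets reported as the model's 'confidence'."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores a classifier might produce for one image.
labels = ["dolphin", "shark", "surfboard"]
logits = [4.2, 1.1, 0.3]

for label, p in zip(labels, softmax(logits)):
    print(f"{label}: {p:.1%}")
# "~94% confident it is a dolphin" only means: of the labels the
# model was trained on, this one scored highest relative to the
# others. It is a property of the scoring procedure, not certainty.
```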
Confidence, a word derived from the same root as fidelity, implies that the consumer of the information sees some truth in the characteristics being considered. Current ‘AI’ systems cannot judge the fidelity of a result, as they have limited ways of comparing the perceived truth to any other truth beyond the information they are given.
Confidence, as it is used to describe the results of an AI algorithm, has to do with how well aspects of the current observation can be mapped onto, and reconciled with, past observations or rules.
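One way to picture that mapping-to-past-observations view is a toy nearest-neighbour sketch (the two-dimensional “observations” below are entirely invented): score a new input by its distance to previously labelled examples. Smaller distance means the observation maps more cleanly onto past data.

```python
import math

# Toy, invented data: past observations as points in a 2-D feature space.
past = {
    "dolphin": [(0.9, 0.1), (0.8, 0.2)],
    "shark":   [(0.2, 0.9)],
}

def score(obs):
    """Return the nearest past label and its distance; smaller
    distance means the observation maps more cleanly onto past data."""
    best = {label: min(math.dist(obs, p) for p in points)
            for label, points in past.items()}
    label = min(best, key=best.get)
    return label, best[label]

print(score((0.85, 0.15)))  # ('dolphin', ...) -- close to past dolphins
print(score((0.5, 0.5)))    # still returns a label, even halfway between
```

Note that the procedure always returns a label and a score, even for an input that sits nowhere near anything it has seen; there is no built-in “I don’t know” unless someone chooses to add a threshold. That gap is exactly what the human consumer’s confidence has to cover.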
Confidence as interpreted in a human context is more about projecting inferences from one context to another. The real danger is falling into the trap of not realising how small our experience base is and becoming falsely confident.
While both are useful, these are not the same thing. Knowing what you don’t know is important.