An academic paper that's well worth reading? Yes, it's possible, and I have one right here. It's called "ChatGPT is…," well, a naughty word comes next, but it refers to a well-known work of philosophy. The paper is written by philosophers, yet it is clear, often entertaining and, ultimately, simply correct. The lesson it teaches needs to be widely understood by anyone getting involved in generative AI. Stated simply: genAI does not have a concept of truth.

Oh yes, if you ask it what truth is, it will predict an answer. But it doesn't know whether that answer is right or wrong; it just knows how statistically likely it is to satisfy you. It follows that speaking of genAI's more flawed efforts as "hallucinations" or "lies" is inappropriate. It isn't misrepresenting or creating falsehoods about the world. It's doing exactly the same thing it does when it gets its answers right.

Speaking of statistical likelihoods: if you replaced one of your martech solutions in the past year, we want to hear about the experience. We'll pull all the data together in our MarTech Replacement Survey. It takes just a few minutes to fill out.

Kim Davis
Editor at large