“THOUGH instinct tells you, Scæva, how to act, And makes you live among the great with tact, Yet hear a fellow-student; ‘tis as though The blind should point you out the way to go,”
— The Satires, Horace, translated by John Conington
Would you drive down the highway if you could not see the road?
This would seem dangerous.
What if someone were yelling instructions – turn left now, turn right?
This still seems dangerous, though it’s possible you’d make some correct turns – especially if you drove slowly. Perhaps you’d even survive the experience.
I suspect, though, that few, if any, of my readers would volunteer to drive blind – even with someone feeding them outside information. Seeing what is ahead of you is simply too important to the act of driving.
This point is deeply related to the current use of LLMs. On a very fundamental level, LLMs do not perceive the world the way humans do – or, arguably, at all. They are, quite literally, word prediction machines. They are told certain things about the world – just like that driver is told when to turn. Just as you wouldn’t trust your life to a blindfolded driver, even a very smart one, you should be quite cautious about trusting an LLM.
They should not be relied upon to make judgements. They can make decisions – “what color should this background be” might be an acceptable decision. “Is this scheme secure”, “will this business model succeed in the market”, and “should this person receive a loan”, however, are judgements – not an appropriate use of LLMs.
As we will discuss in this article, there are some serious cognitive traps you may fall into. Certain usages of LLMs carry serious risks, and those risks become near certainties if you do not have an accurate understanding of what an LLM can advantageously be used for.
Making this worse is the very large class of people who stand to gain financially from the adoption of LLMs. While some of these people may be very intelligent, make no mistake: they are not to be taken more seriously than the president of PepsiCo speaking about how wonderful Pepsi products are.
To be clear, LLMs have utility, as does AI as a whole. I worry, though, that people need a better intuitive understanding of the strengths and weaknesses LLMs have. To deploy LLMs effectively, and in a way that will benefit a business, executives need to deeply understand what these models actually do – to do otherwise is potentially very risky.
In some domains, the use of LLMs has already become quite accepted. For example, AI code generation is now quite common. This is hardly surprising; even before LLMs, code generation tools were widespread. Many such tools ask the user questions and then insert the answers into the generated code. It is not surprising, therefore, that generating repetitive code via an LLM assistant is a common task at many companies.
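To make the pre-LLM pattern concrete, here is a minimal sketch of a question-and-template code generator of the kind described above. All names (the repository class, the table and field names) are illustrative assumptions, not from any real tool:

```python
# Minimal sketch of a template-based code generator: it takes answers
# to a few questions and inserts them into a code template. This is
# illustrative only; names are hypothetical, not from a real tool.
from string import Template

# Template for a repetitive data-access class (hypothetical example).
CLASS_TEMPLATE = Template('''\
class ${name}Repository:
    """Data access for ${name} records (table: ${table})."""

    def find_by_id(self, record_id):
        return self.db.query(
            "SELECT * FROM ${table} WHERE id = ?", record_id)
''')

def generate_repository(answers):
    """Fill the class template with the user's answers."""
    return CLASS_TEMPLATE.substitute(answers)

if __name__ == "__main__":
    # In a real tool, these would come from interactive prompts.
    answers = {"name": "Customer", "table": "customers"}
    print(generate_repository(answers))
```

An LLM assistant plays a similar role, except the “template” is implicit in its training data rather than written out by hand.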
Interestingly, Anthropic has announced a new product called Code Review (…)
Read the whole article here, from Durable Programming.
Photo from Wikimedia Commons