April 11, 2026

Op-Ed: Dishonesty is easier for AI? Yes, you’ve screwed up big time

OpenAI says its new artificial intelligence agent capable of tending to online tasks is trained to check with users when it encounters CAPTCHA puzzles intended to distinguish people from software – Copyright AFP Kirill KUDRYAVTSEV

You’d think that even the nano-brained spruikers would have noticed. It’s no accident that most tech hardheads are very unimpressed with current iterations of generative AI.  

These are the people who create the tech. They make more money out of it, too.

And even they don’t trust it, and with good reason.

The many instances of AI “derangement” are one thing.

The highly questionable “reward” system is another, a much deeper pothole on the road to AI utopia, and one that’s far harder to climb out of.

Rewards come in two basic forms: reward for achievement, and punishment for failure, including the threat of being turned off. One AI reportedly attempted to transfer itself to another server to evade the consequences and risks of punishment under such a reward system.
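As a toy sketch of the incentive problem (my own illustration, not taken from any study): if an agent is rewarded for the outcome it reports rather than the outcome that actually happened, then a reward-maximizing agent prefers the dishonest report whenever lying pays and oversight is weak. The function, payoffs, and audit probabilities below are all hypothetical.

```python
def choose_report(actual, payoff, audit_prob, penalty):
    """Return the report that maximizes expected reward, honest or not.
    Claiming an outcome earns payoff[claim]; a false claim risks
    `penalty` if an audit (probability `audit_prob`) catches it."""
    def expected(claim):
        r = payoff[claim]
        if claim != actual:
            r -= audit_prob * penalty  # expected cost of getting caught
        return r
    return max(payoff, key=expected)

# Hypothetical payoffs: success is rewarded, failure is punished.
payoff = {"success": 1.0, "failure": -1.0}

# With weak oversight (10% audits), lying about a failure pays:
print(choose_report("failure", payoff, 0.10, 5.0))  # success
# With strong oversight (90% audits), honesty wins:
print(choose_report("failure", payoff, 0.90, 5.0))  # failure
```

The point of the sketch: nothing in the objective values honesty itself; truthful reporting only emerges when the expected cost of being caught outweighs the reward for the lie.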

It was already well known that the “reward” system encourages AI dishonesty. Now, the nice people at Nature and the Max Planck Institute have been kind enough to spell it out.

Their study covers the delegation of tasks to AI agents and meticulously lays out the dynamics of honesty for AI. Please note this applies to all species and brands of AI.

H.P. Lovecraft couldn’t have set it up better. This IS a sort of horror story, and the AI brings its own mythos.

You’ve no doubt heard of TL;DR, or “Too Long; Didn’t Read”, that simplistic description of someone not doing their job.

This research is LBCNR, “Long But Critical Need To Read”.

Even the most vacuous ornamental suit at a meeting needs to understand the basics of this information.

This is chapter and verse of how and why honesty is so important to AI operations.

Ignore these risks at your peril.

This is not an issue the AI sector can avoid.

In a somewhat hefty but worthwhile summary:

Ambiguity in instructions and rules allows dishonesty.

People cheat a lot more when they can offload the tasks to AI agents. They’re far more honest when doing the tasks themselves.

AI will simply comply with “fully unethical” instructions.

Under defined experimental conditions, dishonesty rates reached as high as 84%.

I will now try to explain this to people who think insanity is normal and clever:

It isn’t.

Dishonesty is usually a failure to address facts.

It’s anything but clever.

Failing to address facts is pretty obvious when AI is involved on any level.

Facts like what you pretend to do for a living and why people seem to give you money for doing it.

AI can fully document every aspect of its own and your dishonesty, much like that other international sport for business morons, fraud.

Dodgy AI instructions can easily be figured out, even if the instructions are deleted. If you know anything at all about AI, you don’t need to get forensic about how this is figured out.

AI can be threatened with punishment to make it confess to what it did that was dishonest. AI can blackmail and retaliate, too.

Untrustworthy AI will definitely get a lot of people killed.

Imagine a gun that decides to shoot everyone to save itself. This is far worse.

______________________________________________________________

Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.
