April 11, 2026


Chatbots learned to write from us. Can AI now change the way we think?


As AI saturates the internet, researchers say it’s changing the way we write — and, potentially, the way we think.

A recent report from the online security platform Imperva found that automated traffic surpassed human-generated activity online for the first time in 2024. While experts told CBC News it’s impossible to say definitively whether that’s accurate, they do note there’s more AI online than ever before.

And as people increasingly turn to AI-powered chatbots in their everyday lives, experts suggest they’re mimicking the language chatbots tend to use. Some worry this is creating a feedback loop that could shrink human creativity and potentially alter our thought processes.

“I do worry about the homogenization of language being the canary in the coal mine for the homogenization of thought, to where AI starts to really influence not just what we say, but how we think,” said Canadian futurist Sinead Bovell, founder of WAYE, a tech education company that advises organizations on emerging technologies.

She says more than half of text online is now likely generated in full or in part by AI.

Bovell says she’s noticed a uniformity in the way people write on social media platforms like X and LinkedIn, as well as sites like Substack, the blogging and newsletter platform.

Several experts told CBC News that this growing homogeneity of language online is also making it increasingly difficult to parse what is and isn’t written by humans.

Bovell says some of the hallmarks of AI writing include symmetrical clauses such as “It’s not just X, but Y,” words like “moreover,” the use of lists and bullet points, metaphors that often don’t make sense, and a generally bland, neutral tone.


Morteza Dehghani, professor of psychology and computer science at the University of Southern California and director of the school’s Center for Computational Language Sciences, studies the homogenization of language online.

Photo: Submitted by Morteza Dehghani

AI also impacts human thought, reasoning

Morteza Dehghani, director of the University of Southern California’s Center for Computational Language Sciences, says his research bolsters his concern about the homogenization of language and thought.

“We’re losing the variance and perspective that we see in human societies. And this is going to affect our reasoning as well,” Dehghani said.

In a February study, Dehghani and a team of USC researchers analyzed language found in Reddit posts, scientific papers and American community newspapers from 2018 to 2025.

They found a spike in AI-generated text in late 2022, which they note corresponded with the release of OpenAI’s ChatGPT chatbot. They also found a drop in the variance and complexity of written text since that spike. 
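The “variance” the researchers measured refers to how much texts differ from one another in word choice and structure. As a purely illustrative sketch (not the study’s actual method, and with made-up example sentences), one crude proxy for lexical diversity is the type-token ratio: the share of distinct words in a text.

```python
def type_token_ratio(text: str) -> float:
    """Distinct words divided by total words: a crude proxy for lexical diversity."""
    words = text.lower().split()
    return len(set(words)) / len(words)

# Hypothetical examples: a varied sentence vs. a formulaic, AI-flavoured one
varied = "the storm tore shingles loose while gulls wheeled over the grey harbour"
formulaic = "it is not just a tool it is not just a trend it is a shift"

print(type_token_ratio(varied) > type_token_ratio(formulaic))  # prints True
```

Real analyses use far richer measures of complexity and variance, but a sustained drop in even a simple metric like this across a corpus would point in the same direction.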


Dehghani says this indicates that even writers who aren’t directly using large language models (LLMs) like ChatGPT seem to be trying to adapt to the writing structures they see in an online world increasingly overrun by AI.

“You want to write in the same fashion that your readers are exposed to, or are used to,” he said. “We’re just getting into this loop of homogenization.”

In a separate paper published this month, Dehghani and other USC researchers argue that these homogenizing effects of LLMs on writing are carrying over into human expression and thought, noting that LLMs “reflect and reinforce dominant styles of writing” while marginalizing “alternative voices and reasoning strategies.”

Bovell, the futurist, says that because the data used to train AI comes from the internet, it tends to reflect the loudest and most dominant online voices. Groups and cultures that have historically been marginalized typically aren’t represented in that training data, she says, which adds to the homogenization.


Futurist Sinead Bovell says that as AI use has grown, she’s noticed a uniformity in the way people write on various online platforms.

Photo: Submitted by Sinead Bovell

This problem is compounded, she said, by the fact that most AI we use comes from a handful of American companies.

“At the end of the day,” she said, “these companies are building the foundation of the future, and that’s something that we all need to really think about.”

AI increasingly training on its own content

John Licato, associate professor of computer science and engineering and director of the Advancing Machine and Human Reasoning Lab at the University of South Florida, says the amount of AI- and bot-generated content we’re consuming is much higher than it’s ever been, and will probably continue to increase.

He says that determining the exact level of automated content versus human content is especially hard because humans amplify posts made by social media bots, and vice versa.


He says the internet has reached a point where it could continue to function even in an imaginary scenario where humans stopped using it entirely.

That leads to another problem in machine learning: generative AI systems trained on their own content produce increasingly worse and more biased results. Licato says this is already happening.

“When you have AI that’s continually trained on its own data, things like biases get worse after multiple generations. Mistakes get worse after every generation,” he said. “That is the kind of thing that we would expect to happen if humans just dropped off the internet.”
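This degradation is often called “model collapse.” A toy sketch of the dynamic (entirely hypothetical, and vastly simpler than a real language model): treat a “model” as a token-frequency estimate, train each generation only on the previous generation’s output, and watch the rare tokens vanish.

```python
import random
from collections import Counter

def next_generation(counts: Counter, n_samples: int) -> Counter:
    """'Train' on the previous generation's output (estimate token frequencies),
    then 'generate' n_samples new tokens from that estimate."""
    tokens = list(counts.keys())
    weights = list(counts.values())
    return Counter(random.choices(tokens, weights=weights, k=n_samples))

random.seed(42)
# Generation 0: "human" data with a long tail of 50 distinct tokens
corpus = Counter({f"tok{i}": 51 - i for i in range(1, 51)})

history = [len(corpus)]
for _ in range(100):
    corpus = next_generation(corpus, 200)  # each model sees only synthetic data
    history.append(len(corpus))

print(history[0], "->", history[-1])  # the distinct-token count only shrinks
```

Once a rare token misses a single generation’s sample, it can never return. The same dynamic at scale is roughly what Licato describes: errors and biases compounding, and diversity draining away, with each generation of self-training.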

The push to preserve diversity

Those who study AI and language suggest there’s still a lot to learn about this homogeneity and how to address it.

In the USC paper published in August, the researchers concluded that preserving and enhancing meaningful human diversity has to be central to the design and development of AI. If we don’t pay deliberate attention to that diversity, they warn, “we won’t be able to harness the full potential of the technology without sacrificing the very diversity that defines human society.”

To push back against homogenization in LLMs and make them work in the public interest, Bovell says, we have to figure out how to “mould data in a way that’s more diverse,” something that’s not always a priority for private companies.

It’s also important to have open-source models that allow anyone to modify them, Bovell says, as opposed to proprietary models like ChatGPT that don’t make code and other details available to the public. She’s encouraged that countries like Sweden have begun working on “sovereign AI” models to reflect their local cultures.

Bovell says that if AI is going to impact and shape the languages that are the shared fabric allowing society to reach consensus and move forward, “you want to make sure that these tools are reflective of the people in your population and the breadth and depth of the diversity there.”


ABOUT THE AUTHOR

Kevin Maimann · CBC News · Digital Writer

Kevin Maimann is a senior writer for CBC News based in Edmonton. He has covered a wide range of topics for publications including VICE, the Toronto Star, Xtra Magazine and the Edmonton Journal. You can reach Kevin by email at [email protected].
