In the last couple of years, a great deal of fuss has been made about AI (Artificial Intelligence), and about its impact on the future of humanity.
Seemingly out of nowhere, we are suddenly surrounded by ‘AI tools’ that can generate images (like the one illustrating this article) and videos, as well as all sorts of other written content. Smartphones and other such devices now have ‘AI-enhanced capabilities’, designed to make them easier to use.
Now, I don’t doubt that AI can be useful and can serve a purpose. As I said just now, I use an AI tool to generate images from text prompts that I provide it. This is incredibly useful, as it means I no longer have to scour the internet trying to find suitable copyright-free images, without the risk of getting some copyright infringement notice slapped on me (which has happened before).

And then there are things like ChatGPT – where you can basically ‘chat’ with an AI, and it replies almost as if you were interacting with a human being, in a very conversational style.
I wrote a post about this just over two years ago.
Since I wrote that post, search engines such as Google and Bing have started offering ‘AI overviews’ at the top of their search results.

It is similar to what you would get if you asked ChatGPT the same question – a ‘summary overview’ of what it knows, gleaned from trawling through internet search results.
Google has now taken things one step further with its new ‘AI mode’ for searches.

Search results are now completely gone, replaced instead with the ‘AI overview’.
What does this all mean then?
To save you reading the whole of my previous article, I’ll quote the ‘best bit’:
So as I pointed out, it is not beyond the realms of possibility that by replacing ‘traditional’ search engines with these AI-powered chatbots, one can find that the result of a simple search request could be easily ‘narrowed down’ to one result, in the form of a colloquial ‘conversation’, thus removing any other results from the equation.
Search results are already heavily weighted in favour of the ‘big companies’ that can buy their way to the top of search results, either through paid-for advertising or ‘spamming’ of certain keywords.
Maybe all these AI models suddenly being released to the public as ‘beta trials’ is not just a coincidence, but a means of ‘conditioning’ the public into becoming more ‘trusting’ of them.
But as I pointed out earlier, the danger arises when people stop thinking for themselves, and allow themselves to be guided by what the AI tells them.
https://thegrumpyowl.co.uk/2023/04/09/open-the-pod-bay-doors-a-conversation-with-an-ai-about-ai/
So with regards to search engines, what I cautioned about over two years ago is now happening: search results are no longer a list of websites relevant to your query, but an AI-generated ‘overview’ that amalgamates content from multiple sources into a ‘summary’ of what you asked about.
On the one hand, it means that users no longer have to trawl through various websites in the hope of finding the answers they are looking for.
The downside is that users are no longer drawing on a variety of search results and sources to do their own ‘research’. Remember those days of searching for stuff on Google, Yahoo etc., and then finding some ‘gem’ of a website that you added to your bookmarks?
Those days ended some time ago, when Google decided to become another advertising platform: companies could now ‘bid on’ certain keywords, paying Google to rank them higher in search results as ‘sponsored ads’. You’ve probably noticed for some time that most search results are dominated by ‘big corporates’, while smaller websites without the financial clout barely get a look in.
So who’s to say that AI search engines and chatbots don’t already have an ‘inherent bias’ towards the same culprits?
Here’s what I got when I asked Google ‘what happened at Birmingham airport today’:

The ‘AI response’ quotes Sky News, and lists news articles from ‘the usual sources’ on the right-hand side, which it has used to generate its summary. And by ‘usual sources’, I mean all the click-bait mainstream media outlets.
As a bit of a contrast, I was made aware of an incident near my place of work: a road traffic collision this afternoon, which meant that the Stratford Road in Sparkbrook (Birmingham) was closed for a period of time.

Maybe my question wasn’t detailed enough! But the AI response relied on information contained within a six-month-old press release from West Midlands Police. Not quite “this afternoon” (August 6th!).
I had of course already checked local news sites as well as the BBC, and nothing had been reported about this crash. “If it’s not posted on social media, the local news won’t know about it” is something I often tell people. And it seems that if the local news don’t know about something, neither will “the AI”.
So where are we heading then?
I remember a while back reading an article (or perhaps watching a video) asking the question “where did the rest of the internet go?”: you can put a query into Google search, and despite it returning millions of results, after about page four or five it simply ‘gives up’ and says there is nothing more to view.
Could the use of AI essentially make the internet even smaller, if these models are only trained on a ‘top’ slice of web search results?
And does a ‘know-it-all’ AI model mean that people are less likely to learn and figure stuff out for themselves? Why bother to learn and build up your own ‘knowledge base’ of information, if you can just ‘ask AI’ what to do each time you’re faced with some new challenge or conundrum?
Does it make the whole education system redundant? What about science, would people stop ‘experimenting’ to try and figure something out for themselves?
In order to answer questions put to it, any AI model relies on ‘established knowledge’ that it has been trained on.
Anyone who’s ever had dealings with “fact-checkers” should know what comes next! By “established knowledge”, I mean anything that is generally accepted as ‘factual’ and therefore cannot be questioned or reassessed. It’s already difficult for us ‘alternative thinkers’, or anyone with an interest in conspiracy theories, to find information using Google searches: many good sources of such information are regularly hidden or banished from sight, so you really have to know where to look.
It’s no different when chatting with an AI chatbot about such subjects. Go on, try it yourself!
When we become entirely dependent on AI to answer questions and give information, and then readily accept those answers without question, is that the point where human development comes to an end?
It has been said that society has been getting steadily more ‘dumbed-down’ for several decades now. Smartphones have been making people dumb for some time, especially those so beholden to them that they rely on them to tell them what to do.
When people lose the ability to think for themselves, because they just rely on what an AI model tells them to think, society and civilisation as a whole are in deep trouble. That’s how you end up with a ‘hive mind’, where everyone thinks the same, and follows the same ‘advice’ and instructions given to them by ‘the AI’, because it is considered ‘trustworthy and honest’.
Creativity and imagination also become things of the past – why bother taking the time to learn how to paint or create art, or explore the world and take photos and videos, when you can just prompt an AI tool to do it for you?
In short, while I believe that AI does have some beneficial ‘real-life’ uses, there is a danger that once people become dependent on or beholden to AI responses, and we stop thinking for ourselves, the human race becomes stunted, held back from further progress.