While funny, it brings up a valid point.
What about models that can search the internet for answers? Or ones that have access to search results, like a Google- or Microsoft-owned model that can tap into their internal data?
Now think about asking said model a question about a current event, a historical event, or anything where there could be ambiguity in the answer, as opposed to something with a definitive answer (something factual with no gray area).
That model looks at its own training data and also searches the web for current info. Then it comes back with an answer that is only partly factual, because half the response is culled from some conspiracy-type site.
Now consider the implications of the youngest generations taking whatever answer it gives as 100% fact.
How much could that, over time, work against society as a whole and dumb it down considerably?
To me, that’s certainly one big implication of the current tech, and of how companies are just jumping on it without any real thought.
We’re already seeing it being taken advantage of, like those AI friend/partner apps and the significant harm they can cause to people in a weakened mental state.
Where’s the line in the sand? And how in the hell does a system not obey its instructions, as mentioned above with the business scenario? How is it more susceptible to bad actors in production versus testing? Why are the guardrails being breached?
Did we not learn from what caused HAL to go berserk? (Conflicting instructions.) OK, that’s a movie, but it’s funny that we’re running into a similar problem as life imitates art.