
ChatGPT Just Offered Some Lame Ideas On How Elon Musk Could Restore Our Trust


An AI can’t actually feel emotions. That might come as a surprise to anyone who has tried the ChatGPT AI, which can converse with you and even solve riddles (sort of).

Recently, I’ve started viewing the bot as highly capable and even helpful to people in their jobs, but sometimes ChatGPT is not all that different from Wikipedia.

You type in a question like “Who is Tom Brady?” and get a response reminiscent of what Google or Siri might say. I suppose this saves time, since you can stay within the ChatGPT interface and not go hunting around for answers. The “answer” is a bit boring, though. If you ask whether Brady is the best football player of all time, you’ll see a measured response laying out the pros and cons. Ho-hum.

ChatGPT is merely a research demo that can’t understand real emotions and seems to balk at questions where a measure of human empathy is required. The bot will often say things like “I don’t have enough information to make that judgment” — especially if you ask whether Brady should have retired and not played football at all. (It takes a real human to weigh in on that question. I tend to think he should have retired.)

One area where ChatGPT really struggles, and where it becomes a little more obvious that the bot is not as advanced as some of us suspected, is when you ask open-ended questions. I recently inquired about what Elon Musk could do to restore trust in himself, since he seems to be losing favor with more than just Tesla owners and investors these days. Twitter is a hot mess, but the bot doesn’t seem to know that. (The bot is not up-to-date on recent news.)

The bot replied in a measured, practical way; I’ve included the entire response at the end of this column. It’s a bit lame, though. The advice could apply to just about anyone, although the final point about social responsibility isn’t too bad. I was looking for something more nuanced, perhaps a suggestion that he shift focus away from Twitter and regain trust by running Tesla more intentionally. The AI should have known at least something about Musk and how he operates. Some of Musk’s ideas for Twitter, such as the blue-check verification changes, have landed with a thud. He could restore trust either by making better suggestions or by not saying anything at all.

AI can easily churn out encyclopedic information, and we’re now used to that. Ask Alexa a question about the weather, the NFL, or a politician and you’ll hear a few facts but nothing remotely like human insight. For about the last decade, I’ve been hoping bots could do more than feed us facts, that they could offer actual advice. “The weather is going to be quite cold the next few days, so remember to pack a blanket and an emergency kit in your car” is something Wikipedia would never say. In my example of Brady being the best ever, a truly brilliant AI would crunch the numbers and skip the noncommittal mumbo-jumbo. A smart AI might say: “Based on analyzing every NFL player in history, these stats lead me to believe Tom Brady is the best ever.”

Bots like to stay objective; humans are incredibly subjective. With Musk, the bot delivered a generic response that wasn’t that helpful even if it was practical. The key to insight is when you mix facts and opinion in a way that leads to something worthwhile. I wanted to see a reply that was far more compelling and memorable; the bot could have analyzed previous tweets by Musk (or looked at replies from 2021 and picked up on tone), flagged words and phrases that have led to mistrust, or parsed out the overall sentiment online. Facts are easy, but sentiment is hard.
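The tweet-analysis idea above is roughly what lexicon-based sentiment scoring does: count positive and negative words and compare. Here is a minimal, purely illustrative sketch — the word lists and sample tweets are invented for the example, not real Musk data, and real systems use far richer models:

```python
# Toy lexicon-based sentiment scorer. The word lists and sample tweets
# below are made up for illustration; they are not real data.
POSITIVE = {"great", "love", "excited", "amazing", "trust"}
NEGATIVE = {"mess", "fail", "wrong", "mistrust", "thud"}

def sentiment_score(text: str) -> int:
    """Return (# positive words) minus (# negative words) in the text."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    "Excited about the future, great things coming",
    "This rollout was a mess and landed with a thud",
]
print([sentiment_score(t) for t in tweets])  # prints [2, -2]
```

Even this crude approach hints at how a bot could flag which posts drive mistrust, though, as the column notes, genuine sentiment is much harder than counting words.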

ChatGPT itself isn’t lame. I like how the bot can write social media captions and even entire articles with ease. However, the AI has a long way to go when it comes to understanding nuance and emotion, and also analyzing factors like perception, interconnectedness, and social understanding. Humans have a unique ability to comprehend the facts and then extrapolate something helpful.

Here’s hoping ChatGPT grows up and becomes more human this year.

For now, it seems like a glorified version of Wikipedia when you really start asking the tough questions.

Here’s the full ChatGPT response when I asked about Musk regaining trust:

1. Be transparent and honest: Musk should be transparent and honest about his actions, decisions, and plans, and be willing to admit and take responsibility for any mistakes.

2. Communicate effectively: Musk should communicate effectively with all stakeholders, including employees, investors, customers, and the general public, and be responsive to their concerns and questions.

3. Follow through on commitments: Musk should follow through on any commitments he has made and deliver on any promises.

4. Be consistent: Musk should be consistent in his actions and decision-making, and avoid making contradictory statements or changing course frequently.

5. Show humility: Musk should show humility and be open to feedback and constructive criticism.

6. Show integrity: Musk should demonstrate integrity by holding himself to high ethical standards and acting in the best interest of all stakeholders.

7. Be accountable: Musk should be accountable for his actions, and be willing to admit when he’s wrong or made a mistake.

8. Invest in CSR: Musk should invest in corporate social responsibility initiatives, such as sustainability and community development, to show that he is committed to making a positive impact.


