Like others, I’ve been experimenting with OpenAI technology and all the associated tools that have popped up.
The rate of adoption and integration with existing platforms has been astounding. It’s both awe-inspiring and terrifying.
Mostly, the terrifying part is the economic uncertainty it creates for so many people. OpenAI tech is fundamentally changing how humans work, think, and interact.
It’s making our already crowded social and information channels more crowded. We’re probably spending more time reading AI-generated content with AI-generated images than we’d like to acknowledge. But it’s also vastly increasing the speed to market for internal and external products.
For example, a few months after ChatGPT exploded in popularity, HubSpot and Salesforce both announced companion "ChatGPTs" of their own. In my testing, the assistant generated quick answers to questions about my specific CRM data, the kind of reports I would otherwise have had to hire someone to build or spend a couple of hours building myself.
Is the technology perfect? No, and neither are the human workers it's been slowly replacing. Does it make mistakes and lie? Yup. But it does so at a fraction of the cost of a human producing equivalent work.
Will it replace experts in specific niches, making their caliber of work reproducible with a bit of AI training and some refined prompts? Most likely not in the short term.
What this means for the beginner tier of white-collar workers
This is where things get dicey. Do companies cut workers who were doing admin or low-level development work? One person paired with ChatGPT and similar tools can be as productive as three or four beginner-tier white-collar workers.
Will that output be good for the long-term economics and stability of the business? Probably not, but most companies will take the short-term gain as long as things still get done and the money keeps coming in.
But as we saw with off-shoring, sometimes the juicy margins are just too juicy to care about the degradation in quality that can occur.
The bottom 50% of BI devs
This sucks to say, because we were all in the bottom 50% when we started learning. But the future doesn't look bright for those who build BI solutions to simple questions.
In HubSpot and Salesforce, it's already possible to ask a question and have the platform build simple reports for you. This capability will only creep further into the cloud platforms that house your data. I wouldn't be surprised if, in a couple of years, you can load all your data into Snowflake or AWS and ask it to generate insights for you.
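Under the hood, these "ask your data a question" features boil down to handing a language model your schema and a business question and asking for SQL back. Here is a minimal sketch of that pattern; the schema, helper name, prompt wording, and model choice are all my own assumptions, not any vendor's actual implementation.

```python
# Hypothetical CRM-style schema summary the model needs to write queries against.
SCHEMA_SUMMARY = """\
Table deals(id, name, amount, stage, close_date, owner_id)
Table owners(id, full_name, team)"""

def build_sql_prompt(question: str, schema: str = SCHEMA_SUMMARY) -> list[dict]:
    """Build a chat-completion message list asking the model for a single
    read-only SQL query that answers the user's question."""
    system = (
        "You translate business questions into one SQL SELECT statement "
        "for the schema below. Return only SQL, no explanation.\n\n" + schema
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Actually sending the prompt requires an OpenAI API key; shown for shape only:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",  # model name is an assumption
#     messages=build_sql_prompt("Total deal amount closed last quarter, by team?"),
# )
# print(resp.choices[0].message.content)
```

A real product would add guardrails (validating the generated SQL, restricting it to read-only access, running it against the warehouse, and charting the result), but the core loop is this simple, which is exactly why it threatens simple report-building work.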
So what’s going to happen when you need some charts for straightforward KPIs and metrics inside of Tableau? Or PowerBI? Or Excel? Or any other tool? Are you going to go out and hire a $50-$80k/year employee, a $100/hour+ contractor, or ask your AI companion that can spit out an answer that an executive gut checks and then moves on?
More advanced BI solutions will still be valuable: ones whose answers depend on complex architecture and on knowing where to look for previously unexplored connections in the data. But even that may not be far off (look at Tableau's Data Guide, for example).
This topic impacts my own business (MergeYourData.com) and the consulting areas we focus on going forward. It will certainly make our demonstrated expertise beyond the basic topics more and more important.
Overall, ChatGPT and other OpenAI tech brings up a lot of questions for the near future of humanity.
Will future generations be handicapped or elevated because they no longer have to struggle through the learning process of the basics?
Economically, how will we restructure to compensate for an explosion in data and far less need for humans to generate it manually?
What will happen to perceived low value employees? Will they be reassigned to other work or cut entirely?
p.s. Was this article written by me or by ChatGPT? How can you tell? Does it impact how you feel about reading it if I told you it was written by ChatGPT? What if I told you it was written by me?
This is where our future is heading. Real and virtual are getting blurrier. The question is how humans will adapt to the growing distrust of anything virtual. After all, how can you know it's real and from a human, for a human? beep boop boop beep