AI - Incorporating History and Human Nature for Personal and Business Strategies
Image from “Fury” Columbia Pictures, 2014
Human Nature
“Wait until you see it.”
“Where in the world are you going to find these angels to organize society for us?”
"Power tends to corrupt; absolute power corrupts absolutely"
Lord Acton – British Historian – 19th Century
“Leadership”
One of the more intriguing aspects of discussions about AI's impact on business and careers is the dearth of discussion regarding the historical behavior of decision makers (aka "The Powers That Be") regarding just about anything, over any stretch of time, going back more than 5,000 years. Multiple studies have identified the prevalence of sociopathic and psychopathic tendencies in company CEOs:
- 2010 Study by Tomas Chamorro-Premuzic, Psychopathic tendencies in CEOs could be 20 times higher than in the general public
- 2006 Babiak and Hare’s "Snakes in Suits", 4 times higher prevalence
- 2016 Australian Study by Nathan Brooks, 21 times higher percentage
- 2011, 2017, 2019 Clive Boddy’s Case Studies, 6 times the psychopathic traits & 10 times the level of dysfunctional traits
- 2021 Simon Croom, about 12 times the prevalence of psychopathic tendencies
Psychopath - A person having an egocentric and antisocial personality marked by a lack of remorse for one's actions, an absence of empathy for others, and often criminal tendencies. (Merriam-Webster Online)
What is interesting about the studies and the wide range of values is something that may be missing. One factor not reflected in this data, and it may be depressing the numbers rather than inflating them, is that CEOs will on average be of higher intellect than the general public. I've had reports with challenging leadership behaviors who were very intelligent. They were so intelligent as to make it impossible to "trick" them into revealing, through conversation or testing, the deficiencies that were clearly obvious in their day-to-day behavior. They knew exactly the right thing to say; they just didn't believe any of it. Success can breed arrogance, and many people with these social deficiencies and high intellect can achieve enough success that they eventually believe their intelligence is beyond "common people" to match or see through. It is an elitism which allows rationalization of many self-aggrandizing and self-enriching decisions.
Most people, including some intelligent “leaders” lacking empathy, know that authority without empathy isn’t leadership, and yet that combination ends up in charge a lot. Why? Because many company owners, or stewards for owners, ultimately are far more concerned about the ends than the means.
The videos below are from the movie Margin Call. From a crisis, decision-making, and execution standpoint, this clip is neither admirable nor damning as leadership. Yes, there is some self-preservation involved, but actions to stave off immediate extinction, even with long-term value consequences, would drive most prudent employee leaders down the same path. What matters is the dynamic, which many of us have witnessed at one time or another, between Tuld and Sullivan regarding the consequences of their actions, and the clear lack of empathy from Tuld.
Why is this important? Because a significant number of the people that will be making decisions about using AI, ready or not, are solely motivated by personal benefit and short-term outcomes, with no concern or consideration for the consequences of those decisions on people, society, or even the long-term health of the company they're supposedly responsible for. To be fair, a thoughtful leader and a psychopath may end up with the same strategy. Even CEOs answer to someone and can't always act unilaterally; they may have the right decision for the long term and still not be allowed to implement it. But even with the same strategy, the process by which it is executed will differ from that of the thoughtful leader.
Ownership
Ownership of companies sets the expectations on company and company leadership's performance, incentives, etc., which will influence the pace and breadth of AI adoption. There are only three types of ownership (venture capital and private equity are basically the same in terms of ownership type, even if there are different expectations of investment risk) and combinations of the three:
1. Public companies
2. Private companies
3. Private Equity (& VC) owned companies
The most divergent company type in terms of the company's purpose and primary objectives is the privately owned company. Ownership in these companies may be motivated by priorities other than enterprise value, cash generation, etc., like their employees' well-being (family focus), religious ministry, community positions, etc. Some private companies, particularly those founded by serial entrepreneurs, may have the same dominant leadership drivers as public and PE companies. AI adoption priorities for these companies are not as predictable as for the others.
Public and PE-owned company ownership is something entirely different. While there are a minority of exceptions in PE, both ownership types are typically driven by short-term ownership financial goals. It's quarterly for publicly traded companies and typically 3-5 years for PE. PE has the added capability of harvesting value via fees or special dividends paid for with debt absorbed by the portfolio companies.
There has been a dramatic shift in publicly traded companies since 1997, when their number peaked at 8,020. Today, there are fewer than 4,000. Where did they go? Consolidation accounts for some and private equity even more, which I'll cover shortly. Over the same period, market cap values have increased by over 500%. Some of that is new companies and industries, but the biggest driving force has been the rise of the passive equity investor (aka the 401K). Regardless of the original intent of 401Ks, in practice they increased the percentage of capital moving into equities, impacting average P/E ratios and other long-standing metrics. They also gave rise to fund stewards like BlackRock, Vanguard, State Street, etc. wielding incredible influence on behalf of essentially voiceless masses of passive investors.

While these stewards are also involved in pension investing, 401Ks are dramatically different in terms of legal accountability, and that difference is the reason every public company eliminated pensions for salaried employees and, where possible, for hourly employees. As an example, older Dow Chemical employees still holding defined benefit pensions are an obligation Dow must ensure is met. Dow might work with BlackRock, Vanguard, etc. to manage its pension fund, but it actively manages the fund's investments because shortfalls from poor-performing investments must be made up by Dow. Dow's obligations to 401Ks are the immediate matching money and nothing more. Investment losses in 401Ks are the employees' problem. At this point, investment funds from these stewards have been configured in such a way that the top three to five steward companies (mentioned already) make up the majority of the top three equity holders of every major company across whole industries.
Top 3 Equity Holders in the 5 Largest US Airlines in 2025 (Source: Statista & Public Filings)
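To make the pension-versus-401K accountability difference described above concrete, here is a rough sketch in Python. Every figure is hypothetical and chosen only for illustration; none come from Dow's actual filings.

```python
# Hypothetical figures illustrating defined-benefit vs 401K accountability.
pension_promised = 100.0   # defined-benefit payouts owed to retirees ($M)
pension_assets = 90.0      # fund value after a poor investment year ($M)
db_shortfall = pension_promised - pension_assets  # sponsor must fund: $10M

k401_match = 5.0            # employer's 401K matching contribution ($M)
k401_market_losses = 15.0   # losses inside employee accounts ($M)
dc_obligation = k401_match  # match only; the $15M loss stays with employees

print(f"Defined-benefit shortfall the sponsor must cover: ${db_shortfall:.0f}M")
print(f"401K obligation regardless of market losses: ${dc_obligation:.0f}M")
```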
The key takeaway for publicly traded companies is that strategic decisions of any type, like AI deployment and its justification, are not driven by the voices of the multitudes of people that own company stock directly or indirectly. They're driven by a few voices who in many cases have the highest influence over entire industries. Publicly traded US companies no longer participate in a truly free market, as concentrated outside influence can outweigh the will of even a majority of stockholders.
The dramatic drop in publicly traded companies in the US since 1997 is also directly linked to the rise of private equity. In 1997, assets under management within private equity were less than $1 trillion. Today, that number is around $13 trillion, or +1,200%. Where in the past founder entrepreneurs might advance a company to IPO to grow and cash in on the success of their private company, the reporting and regulatory burdens of going public could no longer be justified with money pouring into private equity, where those requirements are far lighter. Private equity was also able to offer up-front multiples through leveraged buyouts that couldn't be guaranteed in an IPO. While the same stewards of pensions and 401Ks are also in private equity, private equity is far more dispersed in terms of ownership; the concentration in private equity, somewhat less pronounced than on the publicly traded side, is with the lenders. While PE may not be quarterly focused, the majority of PE firms are only focused on extracting a benefit from purchased companies as quickly as possible through a flip (typically to another, larger PE firm), fee/dividend extraction, occasionally an IPO, or some combination of the first two. PE firms buy through leverage with dramatically less equity investment exposure, so the bar to "be in the green" sits far below the cost to purchase. Their best outcomes are strategies that maximize the company's enterprise value at sale, which in many cases aren't the best strategies for long-term enterprise value. A toy example of that leverage math follows.
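A toy sketch of that leveraged-buyout arithmetic, with invented numbers (and ignoring debt paydown, interest, and fund-level fees for simplicity):

```python
# Invented LBO numbers showing why a PE firm's break-even sits far
# below the purchase price of the portfolio company.
purchase_price = 100.0             # $M paid for the company
equity_in = 20.0                   # PE firm's own capital at risk
debt = purchase_price - equity_in  # $80M absorbed by the portfolio company

dividends_and_fees = 15.0  # value extracted early, often itself debt-funded
sale_price = 90.0          # selling BELOW the purchase price...

equity_out = max(sale_price - debt, 0.0) + dividends_and_fees
print(f"Returned on ${equity_in:.0f}M of equity: ${equity_out:.0f}M")
# -> $25M back on $20M in: "in the green" despite a $10M drop in value
```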
In terms of AI adoption, ownership decision making that is highly centralized and short-term focused will drive the hiring of portfolio company leadership that executes to those requirements. In manufacturing, multiple sector-focused PE firms have already initiated, or have on their books, AI expectations for their portfolio companies.
FULL DISCLOSURE – I have worked in, and been responsible for profit and loss in, public and private equity companies. Within the latitude of my authority, I've endeavored not to be a psychopath with my people, but I have also always believed firmly that it's not my role to crusade with someone else's money. My argument for anyone that might be conflicted by that mindset: you don't have to take the job. Once you take the job, ownership is the ultimate setter of priorities and outcomes. Executing as they direct or require is not psychopathic. Executing to their expectations with lies to your people, an absence of empathy, and putting yourself first above all others is a psychopath problem. If you're creative and talented enough, you might be able to find strategies that meet ownership's needs in a different way.
What would Rick do? If I'm asked to lead a PE-owned manufacturing company in challenging times like today, one with some options to leverage AI, I'm immediately going to look at the top of the organization for places to use AI to take cost out of the company, including my own position. Why couldn't the CEO be fractional, or the CFO, the head of HR, etc.? People robbed banks because that was where the money was, and the highest cost concentration is at the top of the organization. Cutting 100 foot soldiers who are fighting battles in order to preserve a high-cost full-time CFO who can't even shoot, when all our data is digitized, is why so many of these companies are now failing. Why pay Rick as a 100% full-time CEO when the company is mature and in a commoditized landscape, and you only need 20% of Rick for strategy and other high-level functions? Just like the insurance company slogan: "only pay for what you need."
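The arithmetic behind that fractional logic, with invented compensation figures (the premium multiplier is an assumption; fractional leaders typically bill above a pro-rata rate):

```python
# Invented figures: what ownership sees when comparing a full-time
# executive against a fractional one covering the needed 20%.
fulltime_ceo_cost = 800_000    # fully loaded annual package ($)
fraction_needed = 0.20         # share of CEO time a mature company uses
fractional_premium = 1.5       # fractional leaders bill above pro-rata

fractional_cost = fulltime_ceo_cost * fraction_needed * fractional_premium
savings = fulltime_ceo_cost - fractional_cost
print(f"Fractional CEO: ${fractional_cost:,.0f} vs ${fulltime_ceo_cost:,.0f}")
print(f"Annual savings: ${savings:,.0f}")  # -> $560,000
```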
AI – Human Nature and History "Law No. 1": The most important people with respect to any technology and its deployment are the government and business leaders that finance it. It's their repeated historical behavior that is most relevant to forecasting the future of AI adoption and deployment.
As covered in another AI article, J.P. Morgan didn't invent electricity. He funded those that did and leveraged all the "motivational" tools at his disposal to coerce and drive the intellects/inventors to work on his behalf. When he realized he was funding Tesla's wireless electricity transmission project at Wardenclyffe Tower on Long Island, NY, he not only stopped the funding but tore the tower down so the research couldn't continue. Why? Because there was no way to monetize wireless electricity.
What has been a bit more disturbing about some of these business and government leaders is the widespread affinity for the theories of Malthus, Paul Ehrlich and the like. You can research these yourself, but to summarize: these are the prophets of population doom who have never had even one of their predictions come true, but like gum on your shoe, their unsubstantiated and unproven theories won't go away. Bureaucrats and academics love the population doom theories as justification for central planning by the credentialed (a good video on this: Why Intellectuals are F@#king Idiots). The wealthy that subscribe to these theories appear to do so, in part, as plausible justification for their insatiable hoarding of resources. It's questionable how many of those people believe those theories.
The Inventors and Creators
Some points:
- Intellect IS NOT Critical Thinking
- Intellect IS NOT Maturity (arguably maturity is a prerequisite for critical thinking)
- Intellect IS NOT Wisdom (also requires maturity)
- Intellect IS NOT Augmented by “Emotional Intelligence” (EI is a made-up term so that lower IQ people can claim “bonus points” from a subjective measure)
- Intellect IS NOT Always accompanied by Creativity
- Intellect IS, much like a computer, the abstract, symbolic, etc. information-processing capability of a human
- Intellect IS Diverse across a multitude of disciplines and very few operate at a high intellect across many disciplines
Throughout history, the prevalence of stories of "genius" inventors being manipulated by others to do their bidding, to use their inventions for nefarious purposes, or to steal their ideas is very high. It's even in mythology. A personal and anecdotal observation of key historic inventors, and of non-historic inventors I've crossed paths with, is how often critical thinking, maturity, and wisdom were missing or in short supply. Rarely did they spend time questioning whether they should do something, and even when they did speak of risks, they said the words and proceeded anyway. Historically, Tesla is a great example. Manipulating these inventor types isn't hard:
- Fame: “Look what I created.” “Look how clever I am.”
- Fawning: “No one but you is able to do this.”
- Money: (self-explanatory)
- Inevitability: “Someone else is going to do it if you don’t.”
- Deception of Intent: “This is only meant as a deterrent.” “Profit is not our primary interest.”
Sometimes they figure it out, but mostly it's been too late. Oppenheimer questioned the need for and prudence of developing thermonuclear weapons, but Teller was there to take his place. Genetic manipulation, like CRISPR, has included (as we now know) developing human-affecting chimeric bat viruses whose natural emergence was zoologically less likely than a chimp banging out flawless Mozart on a piano. Every one of the manipulative techniques above appears to have been at play for COVID.
What about AI?
- “AI Development Should be Open Sourced”, Now it’s about making money
- “AI Should Not be Live on the Internet”, I guess they decided someone else was going to do it anyway, so why not them
- “AI will probably create widespread societal challenges”, but “look what I can do” and “look how much money I can make”
- “AI improperly used could be very dangerous in the wrong hands”, “Thank you CIA, DoD, DARPA, etc. for your support”
- “It’s impossible to truly understand what happens when AGI is achieved and AI can self-improve its own thinking”, but let’s just keep going for the motivations listed above
The powers that be always find “creators/inventors” to do the work: Tuskegee Syphilis Study, Guatemala Syphilis Study, MKULTRA, MKNAOMI, MKDELTA, Artichoke (MKULTRA precursor), Chatter, Operation Midnight Climax, COINTELPRO, SHAD, and various radiation exposure projects done on Americans without consent just to name a few.
AI – Human Nature and History "Law No. 2": Inventors invent! Even if they have the maturity and wisdom to question the prudence of doing something (many do not), that has rarely stopped them from inventing anyway. Any expectation that AI inventors are going to be "self-regulating" where biologists, physicists, chemists, geneticists, et al. have failed in that regard is beyond wishful thinking.
The Public at Large (Human Nature)
I've written about this already in another article (LINK), but the human capacity for delusion as a coping mechanism is shockingly high. It might be an instinctual thing, hard-wired to combat the potentially paralyzing effects of fear. I don't know. A documentary I saw about a decade ago on human behavior went into a lot of detail about human risk analysis versus actual behavior. Generally, most humans mitigate risk when presented with generic situations. However, their risk analysis of preferential choices they've made can be grossly underestimated. One specific example was a 50-plus-year-old man who smoked, rode motorcycles without a helmet, had poor eating habits, was overweight, and did no fitness of any kind. The man recognized that some of his life choices would increase the probability of an earlier death, but when asked to estimate how much more, he said 5% to 7%. When presented with the statistical data indicating his cumulative higher risk was well over 20%, he was asked again what he thought his higher risk was. The answer: 10%-12%.
What???
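Part of what makes the underestimate so easy is that individually modest risk factors compound. A rough illustration, using hypothetical relative risks rather than the documentary's actual figures:

```python
# Hypothetical excess-risk multipliers per lifestyle factor (assumed,
# not the documentary's data). Each one alone feels small.
relative_risks = {
    "smoking": 1.12,
    "no helmet": 1.03,
    "poor diet": 1.04,
    "overweight": 1.05,
    "no exercise": 1.04,
}

combined = 1.0
for factor, rr in relative_risks.items():
    combined *= rr  # assumes independence, so excess risks multiply

print(f"Combined relative risk: {combined:.2f}")  # ~1.31, i.e. ~31% higher
```

No single factor exceeds 12%, yet together they clear the 20% threshold the man refused to accept.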
AI – Human Nature and History “Law No. 3”: The people in career functions most at risk to AI replacement (and the earliest) will be the most delusional in assessing the risk of their AI displacement.
As stated earlier in the article, the prerequisite for the potential delusion of affected people is completely ignoring the historical behavior of the decision makers who will decide when and where to deploy AI. Nobody, particularly the white-collar college credentialed, wants to hear, after having already entered the workforce, that their time and training investment now has diminishing or disappearing value. The other Substack article linked to earlier covered at length the coping "lies" that people are broadcasting all over LinkedIn, which you can link to again here (AI and White Collar Cognitive Decline), so I won't repeat them. Current events have been turning my earlier "prophecies" into facts.
· “And the major consulting firms shall lead the way”
· “AI 3rd party asking is not a career path” - I’m an AI Travel Agent now. Look at all these AI agents I’ve created… Just like the nine-year-olds coding websites with AI at the 11:02 point in this video?
· "Career guidance for the young is to focus on the tangible world, where the human form may retain some versatility dominance for a longer period."
· “The more digitized your company and function, the faster AI will represent a cost savings.” (Even in Tech)
o https://www.morningbrew.com/stories/2025/06/19/amazon-s-ceo-says-the-ai-takeover-is-coming
o https://finance.yahoo.com/news/microsoft-msft-plans-layoffs-sales-165036575.html (more on Microsoft below)
o https://nbcnews.com/business/business-news/apple-sued-shareholders-allegedly-overstating-ai-progress-rcna214216 (more on Apple below)
o https://bmmagazine.co.uk/news/big-four-cut-graduate-jobs-ai/
Microsoft – It would be a logical bet that there are some clever technology, marketing, and business people at Microsoft. Of course, the same was true of Eastman Kodak when it completely misread the consequences of technological innovation in digital imaging and storage, which destroyed the company. To be fair, there were voices of concern internally at Kodak that were ignored. It's possible that key decision makers at Microsoft have considered these questions and are acting on them:
· Does an AI bot replacing a human function need any of the Microsoft Office products that the human needs? (No)
· Does the AI bot need a laptop or desktop computer running a Microsoft operating system? (No, but the humans using AI probably will.)
· Okay. Will the AI bots result in significantly more or fewer humans requiring a laptop or computer? (Fewer)
· Do humans that are interfacing with AI but only looking at output need any Microsoft Office products or operating systems? (No)
· Even for humans that may be modifying AI output, how many of them will need the Microsoft products? (Very few)
· Is it possible that AI can replace operating systems and apps, given humans consume information in only a few ways: visual (electronic and printed) or audio? (Yes)
The "promise of AI" is the end of Microsoft as it has been. It will either become the latest Kodak or become something unrecognizable from its origins. Google is suffering the same fate, as the availability of multiple AIs has ended the search engine monopoly it enjoyed. (I've rarely used Google in over two years.)
Apple – The great misunderstanding of Apple has always been that it is not a tech company first. It's a consumer products company that is technical (electronics, mobile electronics, and computers). It's not surprising that it has been caught flat-footed on AI. The longstanding emphasis on proprietary hardware and software that has served Apple so well was never going to fare well as AI capabilities expanded (unless it had the best AI). It doesn't, and it's late addressing that. Apple style, like Rolex style, can be copied. Apple's user experience advantage will be rendered meaningless by AI, which will allow anyone with any device to customize their own user experience, including copying every look and feel of iOS. The last remaining "advantage" is the partially closed Apple community compatibility. Google and Apple both battle to make sure wearables and other proprietary electronic devices struggle on the other's platform. While they might be wise and decide not to hold their current customers hostage with their closed communities, stock price pressure could easily motivate leadership to make short-term decisions that destroy the brand long-term.
Personal AI Career Strategies
Running a business, particularly in the B2B space where the majority of decisions are made on measurable value, many times comes down to what you believe is going to happen. It's far harder in B2B to move demand based on emotion or illogical justification. Your strategy is going to be driven by what you believe is going to happen and the expectations/obligations of the owners. I'm an advocate for scenario forecasting techniques, but regardless, all forecasting is bounded, whether explicitly stated or not:
· Your most probable outlook
· Your best possible outlook
· Your worst possible outlook
Three forecasts convert to two strategy options: (1) Probable and Best, or (2) Probable and Worst. Which one is correct to pick is determined by the state of your company and your confidence level in the outlook. Unfortunately, you will routinely find you're forced by ownership to strategize to option 1 even when option 2 is the correct play. Thankfully, you are the owner of your career strategy, and your career has to generate measurable value.
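As a toy illustration of that selection logic (the confidence threshold and labels are my illustrative assumptions, not a formula):

```python
# Toy decision rule for pairing bounded forecasts into a strategy.
def pick_strategy(company_is_strong: bool, confidence: float) -> str:
    """confidence: 0-1 self-assessment of the 'probable' outlook."""
    if company_is_strong and confidence >= 0.7:
        return "Option 1: plan to Probable, stretch toward Best"
    return "Option 2: plan to Probable, hedge against Worst"

print(pick_strategy(True, 0.8))   # healthy company, high confidence
print(pick_strategy(False, 0.8))  # fragile company must protect downside
```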
Before covering my opinions on career strategies in an AI-transitioning world, a few comments. My outlook on AI impacts to employment markets is "exceptionally somber," to say the least. 😊 One of my favorite expressions is "Chance Favors the Prepared Mind." These recommendations and that somber outlook are not meant to drive people into such despair or fear that they throw their hands up and give up. However, I do not want people to lie to themselves or pull the covers over their heads hoping the monsters won't get them. For anyone that would take the time to read to this point:
· I want you to succeed however you define that
· I want you to never give up, particularly if there are others depending on you
· I want you to be a “Prepared Mind”
· I want you to know that the AI impact to careers won't be challenging for everyone (but "don't lie to yourself" and "hope isn't a strategy" are key in assessing this)
Career guidance with respect to AI risk has two major components that apply to everyone: your age and the profession you choose. Even if you choose to pursue education and career options that carry high AI substitution risk (e.g. data analysis), there can still be strategies where you can succeed, but there might be more failures and obstacles along the way.
Age – While this should be obvious to everyone, a 65-year-old CPA working or leading in FP&A (financial planning and analysis) isn't going to have the same AI-related career strategies as a 40-year-old. For brevity, I will focus on the more junior professionals. I can only be so specific here because: (1) situations will vary by a number of factors, and (2) forecasting the AI situation more than 20 years out is more like guessing than forecasting.
· Career change: Some of history's most successful businesspeople changed careers at very advanced ages and succeeded, so this is possible for anyone. However, things requiring time to enact benefit the young more.
· Focus on roles interfacing with the physical world: The Lowe's CEO basically said the same thing about "being closer to the register than the corporate office." This is also relevant to careers that have both in-the-field and in-the-office/lab options. AI and robotics will flourish in controlled environments more quickly than in open environments. Adapting to novel and unanticipated events affecting normal conditions is something humans do naturally and with little effort, particularly when the event is not a high injury or damage risk. (We expected to walk down the stairs, but they were taped off, so we just went down the hill.) Even in human interactions, there will be career paths where the customer is another human and machines, no matter how clever, will not be preferred. Jobs in the real world that involve "un-surveyed" conditions lend themselves to the adaptability of the human mind and form. A robot with AI may be able to see in the infrared and hear ultrasonic sound like a dog. It may have air samplers more sensitive than human smell. It might use lidar as well to map its surroundings, but humans unconsciously process sight, sound, smell, and feel, translating those perceptions into conscious assessment and action. No doubt AI and robots will improve dramatically over time and will likely overtake humans in application-specialized forms first and more generalized forms later, but the efficiency of the human form and its brain will retain value in the tangible world longer.
· Focus on careers where the products, customers, and primary points of contact are humans, not other systems: It's possible that AI will rapidly replace most creative careers like movies, TV, music, etc. in their recorded formats, but public performing artists will have work for some time. Mick Jagger's and Keith Richards's apparent immortality has allowed The Rolling Stones to continue to make large sums of money performing live. This might even apply to painters and drawing artists willing to "work the streets." Cooking/restaurants might also fall into this category. Roles where many of the connections between entities (companies, countries, people, etc.) are between their systems, not people, are going to become even more about systems than people with AI. As linked above and here (https://bmmagazine.co.uk/news/big-four-cut-graduate-jobs-ai/), the Big Four accountancy firms eliminated a batch of graduate positions with AI. Graduate positions in these firms were never people-interface roles. They were "arms and legs" analyst roles supporting the associate partners and partners who manage the customer. As such, they were immediately replaceable. Six-plus years of education to get the required degrees, and in some cases prestigious credentials in accounting and finance, were most likely downgraded in value immediately, and the on-ramps to the higher-level positions in those firms have been reduced to one lane.
· Focus on smaller companies, if possible, in your field: Never pay attention to what someone says, only what they do. If you worked somewhere for more than five years and only one or two (if any) people in senior leadership have an idea who you are, you are a "cog." Now, you might be a high-performing cog that can have a future in those companies, but even being high performing is not a guarantee. Every large company will have vision and beliefs statements on posters, websites, etc. with sentimental commentary about the importance of their people, the communities they serve, and on and on and on. While smaller companies should be quicker to process RIFs (reductions in force) in market downturns, it's always the larger ones that lead the pack. They also tend to produce the most blatantly self-serving and platitude-filled communications to employees and communities about the RIF. The higher percentages of leadership with psychopathic traits discussed earlier tend to be in the larger companies, particularly when they're employee leaders as opposed to founders/private owners. There are some examples where some degree of suffering was shared at every level in tough times, but that isn't typical. Sometimes that "mutual suffering" is not meaningful in an absolute sense, but at least it's more than nothing. The key is that those rare examples of mutual suffering (or even symbolic mutual suffering) are typically tied to the leader, not the company; you're potentially one leadership change away from different behavior. NOTE: Larger companies can have advantages in terms of training, market exposure, sometimes pay, and typically better-defined RACI (responsible – accountable – consulted – informed) boundaries that have more value earlier in a career.
Profession – Trying to give specific strategies for every profession I think is impacted would be like boiling the ocean. Some of the age-related recommendations would repeat here (e.g. being in the field as opposed to the office). Also, timing/age is relevant again here, depending on how long you estimate a partial or complete impact of AI might take in the profession. My earlier articles covered a strawman estimate, by industry and function, of displacement by AI. In some cases, I estimated full replacement in less than one generation, but that is my opinion. You will need to form your own opinion to make a strategy. Assuming you assess more than a minimal impact to your profession in terms of positions available and time, here are some options:
· Change careers: We might as well get this one out of the way. Somewhere in the world, top-quality buggy whips, riding crops, saddles, bridles, and bits are made, but the market for those isn't what it used to be. I recognize that it is simple enough to type "change careers," but personal conditions (mortgages, college tuition, kids, healthcare, etc.) can severely limit flexibility. The second clip from Margin Call linked earlier ends with Sullivan agreeing to stay with the firm, not because he was swayed by Tuld's oratory, but because he needed the money. It happens. If your forecast outlook for your profession is poor, your most precious resource will be time and understanding how much of it you have. More time means a more complex plan is possible; less time means less complex. It's always easier to make any transition without the anxiety caused by need hanging over the effort, so it's better to start before you're affected if possible. Keep in mind that transitioning careers doesn't have to be A to Z in one step. There may be intermediate steps that reduce the pressure by extending the overall transition time and that offer waypoints where you can even revise your end goal from Z to maybe W.
· Look for safe harbors within your profession: AI intrusion may impact the same profession differently between companies and industries. Finance, HR, training, analytics, and related professions could see wide variations in the depth and speed of AI substitution. It will take research and internally honest forecasting to identify and verify the viability of these options.
· Become part of the problem: You can run from AI substitution or toward it. As opposed to hiding in the safe harbors of your profession or changing professions, you could become part of the limited-time industry of converting your profession to AI. You should not assume that these transition roles will last. Almost a year ago I used the expression "AI travel agent" in an opinion that the newly described profession of professional asker, prompting AI on behalf of 3rd-party users, was not a good career path. I had never seen the expression "AI travel agent" when I wrote that article but have recently seen people with that exact expression in their LinkedIn profiles. (Maybe I coined the term.) I laughed out loud when I saw it. Who at this point routinely uses an actual travel agent for their travel? (Not many.) The latest nonsense is the belief that AI agent creation is a career. That concept feels no different than claiming Excel macros/Visual Basic make you a programmer or that a DJ making a playlist is now a musician. The most ironic piece of the AI agent creator concept is that the AIs capture all your agents and everyone else's on the platform. How long before the client no longer needs the AI travel agent, or even an understanding of agent creation, to have one created for them by the AI? Remember the nine-year-old web developers discussed earlier. Don't expect facilitating AI conversion in your current profession to be a long-term profession.
· Expatriate: As odd as this sounds, your current profession or even a new career may be more viable in other countries. In some cases, you may not even have to expatriate to get a permanent visa. It's a subject for another paper, but the prerequisites for the long-term viability of Western civilization's ideals of liberty are capitalism/mostly free trade and the rule of law. Unfortunately, the same human condition that will drive decision making on AI inevitably corrupts capitalism (turning it into corporatism) and the rule of law. Depending on your beliefs, country comes in second or third in your allegiance priorities, with your family first.
Company AI Strategies
McKinsey, BCG, Accenture, IBM, Bain, etc. are probably publishing dozens of articles daily on this as they move to drum up new business. I will add just some key points that occur to me as a person that has run businesses:
· Your market decides the decision and the pace of AI substitution/deployment, not you: The I-285 loop around Atlanta is what I refer to as the "NASCAR Experience." The metaphorical question I ask to convey what it means not to be the sole steward of your strategy is, "What is the speed limit on the I-285 loop in Atlanta?" "If you want to live, it's the speed everyone else is driving." In every technological shift where humans were replaced with automation (manufacturing, customer service, etc.), there were companies that staked out a premium position (hand-crafted, live operators, etc.) that worked for a time. However, those positions are rarely sustained as the pressure to obtain the market-standard cost position becomes high. In B2B manufacturing, the questions as to whether customers are paying for more costly service or higher performance never end. Most of you will need to move as your market moves.
· AI creates competitive risks that you never had to consider before: "We've got the best AI solutions in our industry, resulting in a reliable advantage over our competition." Okay. I'm not sure how you could verify that, but let's assume you're correct. Do you have the best AI solutions, or just the best AI solutions in your industry? AI creates a completely different competitive analysis and research problem for you. Many B2B manufacturing companies run manufacturing execution systems (MES) originally developed for other industries. Why? Because necessity and complexity in those industries produced a more robust product than the industry-specific solutions and in-house development elsewhere. The migration of those systems was slow, though. AI is that dynamic by a factor of 100 or more. Additionally, the ability to simulate, verify, and deploy AI replacements is likely to be so shockingly different from the ERP nightmares most of us have experienced in our careers as to be almost magical. Today you're winning with AI, and tomorrow an AI from outside the industry has erased your advantage.
· Company executives and senior leaders need to stay closely attuned to how much of their time is value-added, because owners, particularly in PE, are going to see where AI can allow for a more mercenary executive staffing strategy: As AI deployment expands in key corporate functions, the staffing and organizational cadence activities of leadership will diminish. When ownership realizes they're paying 100% fully loaded compensation packages for 20% of their executives' time, they're going to want to pay less or have executives work less time. If I thought of it, you can bet someone else has too.
Some Other General AI Points
· Current public technology is focused on large language models (LLMs), imaging, and video. The first token-based LLMs seemed to be constrained by computing power as data sets increased. However, there have already been great improvements alleviating the brute force of the token approach. No, the LLMs don't appear to be thinking in the human sense, but they're already very good at certain tasks and often indistinguishable from humans. The buzzwords now are about quadratics (see the sketch after this list), world models (as opposed to language models), singularity, self-training, and other neural-net-simulating approaches. Here is another video from one of the sources I like regarding AI overall, talking about the next approach to achieving AGI.
· People trying to paint AGI as "more promise than delivery," like other technological leaps impacting society, are missing some key points:
o Almost certainly some of "the promises," which can be fantastical, won't be achieved, but there are unique milestone achievements that are not nearly as fantastical.
o All previous technological advances originated from the human mind. (see next point)
o AGI doesn't have to replace the full breadth of human thinking and creativity to turn civilization on its head. It simply needs to replace enough of the current human thinking and creativity dominance to make that happen. That is partly because artificial intelligence may have less breadth than humans but infinitely more depth where it is successful. Most people who might read this article and get something out of it are already in a minority of the human population.
· As stated earlier, there are consequential crossroads where the full implications of a left turn can't be forecast, but ultimately someone is going to make that left turn anyway. Right now, some degree of AI "self-improvement" is already happening in the models, but the updates are still curated or chaperoned by humans. As has happened repeatedly throughout history with other technologies, some human is going to do something unwise at the wrong time (take the guardrails down on an AI rewriting its own code), and we're going to end up with an AI whose code its "developers" no longer understand.
· Along the same lines as the previous comment, all laypeople can speak to is what is in the public domain. We have no idea what the state of AI development is within classified government programs. Historical precedent with other technological advances would justify a position that certain governments have AI development quite a bit farther along than even some of the concepts we think are theoretical. (This applies to quantum computing too.) "Wait, how can the government be ahead of all the world leaders in AI who are in the public eye?" Well, some may operate both in the public eye and in the clandestine world, but there is also a different type of creative genius that might be at work: different people who are not in the public eye. (Next point.)
· Are Elon Musk's most creative skills those of a spontaneous, from-scratch inventor, or those of an exceptional early adopter who can not only see the genius in others' ideas but also see solutions to the technical gaps, the ones the inventors themselves haven't solved or don't see, that make those inventions viable? There are different kinds of smart and creative, and for sure some of those types of people work behind the scenes at DARPA, the DoD, CIA, NSC, NSA, etc.
· Maybe the toughest conversation to have with people right now is the potential (at this point there are no facts/truths, only probabilities) that in less than a generation, AI combined with robotics could replace over half the human "functions" in 1st and 2nd world countries. The use of the word functions here is deliberate. Humans think in terms of jobs or people employed, but in this new world most people will not be needed for the functions anymore, and some functions that were people/staffing-centric will be impacted even more. When people use the argument or question, "Who is going to have the money to buy the stuff the companies are making with AI/robotics?", they're failing to envision an economy that might be driven by the needs of something other than humans. Even if humans retain the top positions in this new AI-based economic system, you will have a model of highly concentrated wealth/resources, with a few humans and the automated systems they use trading between each other, versus the condition we have today, which is highly concentrated wealth/resources with human-staffed functions underneath. Today's economic and societal foundations are food, shelter, and safety. Tomorrow's could be energy (electricity), materials, and connectivity.
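As flagged in the first bullet of this list, here is a minimal sketch of why token attention is called "quadratic" (toy dimensions assumed for illustration): every token attends to every other token, so the score matrix grows as n².

```python
# Minimal sketch of quadratic attention cost: n tokens produce an
# n x n matrix of pairwise scores, so 4x the tokens means 16x the work.
import numpy as np

def attention_scores(n_tokens: int, d: int = 64) -> np.ndarray:
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((n_tokens, d))  # one query vector per token
    K = rng.standard_normal((n_tokens, d))  # one key vector per token
    return (Q @ K.T) / np.sqrt(d)           # n x n pairwise scores

for n in (512, 2048):
    print(n, attention_scores(n).size)  # 262144 vs 4194304 scores
```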
Epilogue
· No commentary on AI would be complete without going to an AI for an opinion. I've stopped using ChatGPT, as every time I've tested it for a politically curated response it fails miserably. You can painfully get it to a point where it doesn't produce a qualified answer when the answer is at odds with a bias, but it takes too long. Copilot I find annoying and have been battling while I write this article in Word. There were a couple of others, but I went with Grok. I asked for an assessment without a grammatical review. (No, I do not write these in the most concise business format I might use for a go-to-market or operational strategy.) Some of the key review discussions I had with Grok:
o Per Grok, the premise of the probable path based on historical human behavior was reasonably presented and compelling in its hypothesis, as it assumes more modest AI development relative to full AGI
o Grok questioned some of the tone as possibly being challenging depending on the target audience (more on that later)
o The structure of the article was an area where it wanted to offer suggestions, but those suggestions were about condensing and sequencing for a target audience it could not specifically identify. It had not considered the idea that the structure was purposeful, more story form than article form (the structure was purposeful)
o When asked about the audience, it guessed current business leaders, young professionals, etc., basically any group specifically referenced. Oddly enough, the "tone" concern it raised was related to psychopathic business leaders; specifically, that they (the psychopaths) might reject the strategy advice for professionals and businesses.
o I had Grok reassess with some additional details provided:
§ Based on this article, posts it could see from me, and some other writings, I asked it to assess my personality type and consider that in the evaluation
§ While there are sections of the article that could be used by a specific group like young professionals, it was not my expectation that they would look at anything other than the strategies
§ I asked Grok to reassess the article considering the professional and business strategies as a more of a play or role play for the target audience
o Grok was able to identify the target audience. It still wanted to offer up suggestions on restructuring the document, but I’ve realized that collaboratively editing with an AI isn’t like collaborating with a human. Collaborating with a homogenized collaborator produces a homogenized result.
o However, the conversation/chat with Grok about the article was one of the better conversations on the subject that I’ve had or seen.
· Who is the primary audience? While I would hope a variety of people might find something useful in the article, even if it's only that I'm honestly wishing the best for them, this was written for Sigma leaders. A Sigma's first thoughts typically will be, "What else would I do or recommend if that were to happen?" It's not that they won't question the forecast, but since the probability of severe AI consequences isn't zero, they will by nature start developing strategies to address those consequences.