Fact-Checking David Sacks’ “1,000,000Γ— in Four Years” AI Progress Claim

πŸ€– This article was partially generated using OpenAI’s deep research.

Analyzing the exponential acceleration in AI capabilities through advances in models, chips, and compute infrastructure

Venture capitalist David Sacks recently argued that artificial intelligence is on track for a million-fold (1,000,000Γ—) improvement in four years, driven by exponential advances in three areas: models/algorithms, chips, and compute infrastructure. In a podcast discussion, Sacks stated that AI models are getting “3–4Γ— better” every year, new hardware generations are “3–4Γ— faster” each cycle, and companies are deploying exponentially more GPUs in data centers – roughly another 3–4Γ— increase per year in total compute.

David Sacks Explains How AI Will Go 1,000,000x in Four Years

Multiplied together, these trends could yield 100× × 100× × 100× = 1,000,000× growth in overall AI capability within four years. Such a claim is extraordinary, so let’s break down each component and see what the evidence says about its accuracy.
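To make the arithmetic behind this headline figure concrete, here is a minimal sketch – illustrative only, using the per-year growth rates Sacks cites (plus √10 ≈ 3.16× as the "10× every two years" equivalent) rather than any measured data – showing how annual multipliers compound over four years and how three ~100× factors combine:

```python
# Illustrative arithmetic only: compounding the growth rates Sacks cites.
# The 3x / 4x / sqrt(10) figures are the claims under discussion, not measurements.

def compound(per_year: float, years: int = 4) -> float:
    """Total multiplier after compounding a per-year gain over the given years."""
    return per_year ** years

for rate in (3.0, 10 ** 0.5, 4.0):   # 10**0.5 ~ 3.16, i.e. "10x every two years"
    print(f"{rate:.2f}x per year for 4 years -> {compound(rate):,.0f}x total")

# Sacks' headline number: three roughly independent ~100x factors multiplied together.
models, chips, deployment = 100, 100, 100
print(f"Combined: {models * chips * deployment:,}x")
```

Running this prints roughly 81×, 100×, and 256× for the three rates – so a sustained 3–4× annual gain per factor is exactly what a ~100× improvement per factor (and ~1,000,000× combined) requires.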

1. Model/Algorithm Improvements: “3–4Γ— Per Year”

Sacks suggested that “the models are improving at a rate of… three to four times a year”. By this he meant that AI algorithms are not only getting faster and more accurate, but qualitatively more capable – evolving from simple chatbots into more advanced reasoning systems, with the next leap being autonomous AI agents. Is it true that AI model performance is roughly tripling or quadrupling annually? Evidence from AI research trends supports rapid algorithmic progress, though 3–4Γ— per year is on the high end.

Algorithmic efficiency has been rapidly improving. Research from Epoch AI shows how the compute required to achieve a given performance on ImageNet (a key vision benchmark) has fallen dramatically over time – about a 10× reduction every two years. In other words, modern algorithms can reach the same accuracy with only 1/10th the computing power that was needed two years prior. This equates to roughly a 3× (√10 ≈ 3.2×) efficiency gain per year over that period.

Such gains – due to better architectures, training methods, and scaling strategies – mean that for a fixed amount of compute, today’s models are far more capable than those of just a few years ago. An OpenAI study found, for example, that between 2012 and 2019 the amount of compute required to train a model to AlexNet-level performance on ImageNet fell by 44Γ— (algorithmic efficiency doubling about every 16 months).

Beyond raw performance metrics, today’s frontier models also demonstrate new qualitative abilities that previous generations lacked. Sacks gave the example that we “started with pure LLM chatbots, then we went to reasoning models… [next] the agents”. Indeed, the industry is now striving to move beyond chatbots that can only respond to questions, toward AI “agents” that can plan and act autonomously.

A 2024 McKinsey report observes that we are beginning an evolution from conversational tools “that answer questions and generate content” to AI agents that can execute complex, multistep tasks – essentially, moving from “thought to action”. In line with Sacks’ remarks, current top models like GPT-4 have stronger reasoning and problem-solving skills (e.g. using chain-of-thought logic) than earlier chatbots, and upcoming systems are being designed to use tools, APIs, and memory to act as goal-driven agents.

Do experts back up a “100Γ— improvement in models in four years”? Extrapolating current trends, it’s conceivable. If algorithms improve ~10Γ— every two years (as historical data for some tasks suggests), that would be ~100Γ— in four years. For instance, one analysis found that across many AI tasks, the compute needed to reach a fixed performance has dropped by a factor of 10 annually when combining both hardware and algorithmic advances.

Even focusing purely on algorithmic advances, one report noted that each year since 2010, researchers have found ways to make AI models about 3Γ— more efficient on average. It’s important to clarify what “100Γ— better” means in this context – often it refers to achieving the same result with 1/100th the resources, or conversely achieving much higher performance given the same resources.

By that measure, many credible analyses do support on the order of 100Γ— algorithmic improvement over a multi-year period. OpenAI’s own researchers concluded that “hardware and algorithmic efficiency gains multiply” and that algorithmic progress has outpaced Moore’s Law in recent years.

In summary, Sacks’ claim of rapid model progress is grounded in reality: AI models have been improving extremely fast, roughly tripling in effective performance per year by some measures. However, whether this exact 3–4Γ— annual pace will continue for the next four years is uncertain – it could slow as models approach certain limits, or surprise us with new breakthroughs. Nonetheless, the idea of ~100Γ— better AI algorithms in four years is not outrageous given the recent history.

2. Hardware (Chips): Are New AI Chips 3–4Γ— Faster Each Generation?

The second part of Sacks’ argument is that AI hardware performance is growing exponentially. “Each generation of chips is probably three or four times better than the last,” he said, pointing to NVIDIA’s latest GPUs and innovations like networking multiple chips together with high-speed interconnects (e.g. “Nvidia’s NVL72… a rack system… to create much better performance at the data center level”). Here, Sacks is highlighting both the leap in single-chip speed and the improved scalability of many chips working in concert.

Single-chip performance gains: Modern AI accelerators are indeed delivering massive generational improvements, though 3–4Γ— per generation is an upper-range estimate. NVIDIA’s GPUs illustrate this trend. The NVIDIA A100 (2020) was a workhorse for AI training, but the next-generation NVIDIA H100 (2022, “Hopper” architecture) is vastly more powerful.

NVIDIA’s technical disclosures show the H100 offers up to ~6Γ— greater throughput than A100 on certain AI operations, thanks to architectural advances like new Tensor Cores, faster memory (HBM3), and support for lower-precision math (FP8).

In general tasks, a single H100 runs roughly 2–3× as fast as a single A100, and even more when taking advantage of 8-bit (FP8) precision and sparsity features – which comfortably meets Sacks’ “3–4× better” benchmark for that particular generational jump.

Looking ahead, NVIDIA’s next architecture after Hopper – “Blackwell,” a 2024–25 product generation – is designed to continue this trajectory of major gains. Competing AI chip vendors (such as Google with its TPU v4/v5, or AMD with its MI300 series) are similarly pushing for large leaps in performance per generation.

While Moore’s Law (transistor density) has historically yielded roughly a 1.4× improvement per year, AI-specialized chips are exceeding that pace through architectural optimizations and lower-precision arithmetic. It’s not guaranteed that every new chip will triple performance, but recent history (e.g. V100 → A100 → H100) shows multi-fold jumps are achievable.

Next-generation AI hardware is scaling up dramatically. NVIDIA’s DGX GB200 “NVL72” rack is a 120 kW datacenter system that interconnects 72 GPUs (the upcoming Blackwell-generation chips) via NVLink switches. Essentially, this single rack acts as one giant supercomputer: it’s quoted at over 1.4 exaFLOPS (1.4×10¹⁸ operations per second) of AI compute at 4-bit precision – enough to handle training for models with tens of trillions of parameters in memory.
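As a quick back-of-envelope check on those rack-level figures (using only the numbers quoted above, which I have not independently verified against official spec sheets), dividing the rack throughput by the GPU count gives the implied per-chip speed:

```python
# Back-of-envelope: implied per-GPU throughput of an NVL72 rack, using the
# figures quoted in this article (not independently verified specs).
rack_flops_fp4 = 1.4e18      # ~1.4 exaFLOPS of 4-bit (FP4) AI compute per rack, as quoted
gpus_per_rack = 72           # GPUs linked by NVLink switches in one rack

per_gpu_pflops = rack_flops_fp4 / gpus_per_rack / 1e15
print(f"Implied per-GPU throughput: ~{per_gpu_pflops:.0f} PFLOPS at FP4")
```

That works out to roughly 19–20 PFLOPS of low-precision compute per GPU, keeping in mind that the figure only holds at 4-bit precision.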

Nvidia and its partners claim that one NVL72 rack can deliver the same performance as ~144 conventional GPU servers, while using the same power as those older systems. That’s roughly a 25Γ— performance boost at equal power from advanced interconnect and system design alone. This exemplifies Sacks’ point that it’s not just individual chips getting faster – it’s also how chips are combined.

It’s reasonable to say single-chip hardware could improve ~10× (or more) over a 4-year span (~2 generations) – well short of Sacks’ rough 100× in four years for chips, though the gap narrows considerably once system-level innovations like the NVL72 are included.

It’s worth noting that some of the hardware improvement will translate into cost reduction rather than pure speed. Sacks himself acknowledged that part of the million-fold gain would be “captured in price reductions… in the performance ceiling”. For example, a new chip might offer the same performance at much lower cost or power.

From an economic standpoint, a 100Γ— hardware improvement could mean that what used to require $1 million of GPUs can be done with $10,000 worth a few years later. Either way – higher max performance or much cheaper computing – the progress is impactful.

Current industry roadmaps indicate continued exponential hardware gains at least in the near term. Therefore, Sacks’ claim that the chip side of the equation will provide on the order of 100Γ— more capability in four years is ambitious but not implausible, assuming one more major GPU generation and widespread adoption of advanced multi-GPU systems.

3. Compute Scaling: 100K→Millions of GPUs in Data Centers?

The third factor is the sheer scale of compute deployed for AI. Sacks noted that companies are rapidly expanding the number of GPUs (graphics processing units) in their AI training clusters. He gave an example: “when Elon first started training Grok (his AI at xAI), maybe 100,000 GPUs. Now they’re up to 300,000, on the way to a million… OpenAI’s data center, Stargate… within a couple years they’ll be at 5–10 million GPUs”. These specific numbers raised some eyebrows – are they accurate?

Current GPU deployment at leading AI firms: No company has yet publicly announced having millions of GPUs in one cluster, but the largest players are already in the hundreds of thousands. For instance, Meta (Facebook) has been aggressively building out AI infrastructure – CEO Mark Zuckerberg said in early 2024 that Meta will have purchased about 350,000 Nvidia H100 GPUs by the end of 2024, and altogether will own roughly 600,000 AI chips including older generations.

This gives a sense of scale: one tech giant alone is acquiring over a third of a million top-of-the-line GPUs to power its AI efforts. Meanwhile, OpenAI/Microsoft have embarked on a project code-named “Stargate” to dramatically expand their compute capacity. According to Bloomberg and other reports, Microsoft plans to spend up to $100 billion on AI supercomputers for OpenAI.

The first Stargate data center campus (in Iowa and/or Texas) will accommodate as many as 250,000–400,000 GPUs in its initial phase, with about 64,000 of Nvidia’s next-gen “GB200” GPUs (Blackwell) expected to be installed by end of 2026 in one location. And that’s just a start – the full Stargate initiative envisions 5–10 such giant data center campuses by around 2030.

If realized, that could indeed mean on the order of millions of GPUs dedicated to OpenAI’s work across their facilities. Similarly, Google (which uses its custom TPUs as well as GPUs) and Amazon are scaling up AI infrastructure, and new entrants like Elon Musk’s xAI are investing heavily too.

Musk’s xAI reportedly acquired around 10,000 GPUs initially in 2023 for its “Grok” model, but has much bigger ambitions: Musk hinted that Grok 3 will be trained on 100,000 Nvidia H100 GPUs, a huge step up from the ~20,000 GPUs used for Grok 2. In fact, Musk is reportedly seeking billions in funding to reach perhaps 100k or more GPUs for xAI.

So while Sacks’ phrasing might have blurred some lines (it’s unlikely xAI actually had 100k GPUs at the very start; that figure may include future plans or other projects), the general trend he cites is valid – the top AI labs are racing from tens of thousands of accelerators to hundreds of thousands, and planning for millions.

100Γ— more compute in four years: Sacks’ compute-scaling projection (100Γ— increase in deployed GPUs over four years) is aggressive but not inconceivable given these initiatives. To put it in perspective, Microsoft’s first Azure supercomputer for OpenAI in 2020 had 10,000 GPUs. By 2023, that was upgraded to “tens of thousands” of Nvidia A100 GPUs in Azure (let’s say ~20–30k, a 2–3Γ— increase).

Now, with the Stargate build-out, OpenAI could have on the order of 100k–200k GPUs by 2025-2026 (another ~5Γ—+ increase), and as the later phases complete, possibly millions by 2027-2028. If Meta’s 350k H100 figure is achieved in 2024, that alone is a 5–10Γ— jump from the number of top GPUs any company had just a couple years prior.
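As a rough sanity check on this trajectory, the sketch below compounds a set of approximate GPU-count waypoints – the specific values are illustrative midpoints I have assumed from the ranges quoted in this section, not confirmed deployment figures – to show the implied year-over-year scaling:

```python
# Rough growth-rate check on the GPU-count trajectory discussed above.
# Waypoint values are assumed midpoints of the quoted ranges, for illustration only.
waypoints = {2020: 10_000, 2023: 25_000, 2026: 150_000, 2028: 5_000_000}

years = sorted(waypoints)
for start, end in zip(years, years[1:]):
    factor = waypoints[end] / waypoints[start]
    per_year = factor ** (1 / (end - start))
    print(f"{start} -> {end}: {factor:,.1f}x total, ~{per_year:.1f}x per year")

overall = (waypoints[2028] / waypoints[2020]) ** (1 / (2028 - 2020))
print(f"2020 -> 2028 sustained rate: ~{overall:.1f}x per year")
```

On these assumptions the sustained rate is only ~2× per year overall, with the steepest climb (roughly 6× per year) packed into the final Stargate phases – which is why the 100×-in-four-years compute claim hinges on those later build-outs actually happening on schedule.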

So, extrapolating to 5–10 million by ~2027 might be on the optimistic side, but it’s not a wild fantasy if AI demand and investments stay on their current exponential trajectory. It would require enormous capital expenditures (on the order of tens of billions of dollars for hardware, plus building new data centers) and addressing technical challenges (power and cooling for so many GPUs, supply chain limits, etc.).

Not every player will scale that far that fast – it’s likely to be a small handful of projects globally (e.g. Microsoft/OpenAI’s Stargate, perhaps a similar effort by Google or a government lab). Nonetheless, the notion of a 100Γ— expansion in total AI compute over four years is supported by the plans currently in motion in the industry’s upper echelons.

It’s important to note that these millions of GPUs would not be one monolithic supercomputer but distributed across many locations and clusters. For effective training of a single model, there are diminishing returns to adding more and more GPUs beyond a point (due to communication overheads and algorithmic parallelization limits).

However, techniques are improving to scale training across more chips (e.g. efficient model parallelism, sharding, better networking like NVLink switches). If those hurdles are overcome, future AI models (possibly aiming at AGI-level complexity) might indeed utilize hundreds of thousands of GPUs in a coordinated run.

Even if not, having millions of GPUs total means many AI models can be trained or run simultaneously, pushing overall AI capabilities in the world dramatically forward. In short, Sacks’ third claim – exponential growth in deployed compute – is broadly accurate, though the exact numbers he quoted for specific companies are speculative.

4. Combining the Trends: Will We Really See a 1,000,000Γ— Leap?

David Sacks’ conclusion was that when you multiply these three factors – better algorithms (~100Γ—), better chips (~100Γ—), and massively more chips (~100Γ—) – you get a potential 1,000,000-fold increase in AI capability in about four years. Is such a breathtaking gain plausible? It’s an optimistic scenario, but not strictly impossible on paper. Each vector of improvement, as we’ve seen, has some grounding in current trends. However, there are caveats:

  • Independent vs. overlapping improvements: The three factors aren’t completely independent. For example, algorithmic advances often allow using less compute to achieve the same result – which could reduce the need for scaling up hardware. Conversely, the availability of huge compute can enable researchers to discover new algorithmic techniques (as larger experiments become feasible).
  • Bottlenecks and limits: Maintaining 3–4Γ— per year improvement on each front could prove challenging. On the algorithm side, some researchers wonder if we’ll hit diminishing returns on scaling up current architectures. For hardware, there are physical and engineering limits: power consumption, fabrication constraints, memory bandwidth and data movement (which is increasingly the bottleneck at scale).
  • Timeline: Sacks’ timeline of four years might be slightly too short for the full effect. The compute deployment plans (millions of GPUs) seem to be aiming for the end of this decade (mid-to-late 2020s) rather than just by 2027.

The Bottom Line

The notion of a million-fold boost in AI capability in a handful of years is dramatic but not baseless. If you compare a 2023 state-of-the-art AI (say GPT-4 running on an A100 cluster) to what might exist in 2027 or 2028 (a much more efficient model, running on tens of thousands of H100/B100-class chips, with many more parameters and possibly new agentic abilities), the difference could indeed be on the order of 10⁶.

Some of that will be experienced as cost/performance improvements (making advanced AI accessible and deployed widely), and some as pushing the frontier of what AI can do. Sacks’ claim underscores how people often underestimate exponential growth – going 10Γ— every two years compounds to 100Γ— in four years, not 20Γ—.

Many experts would agree that we are in a period of unusually rapid AI progress. As one analysis succinctly put it: “AI models require 10 times less compute to reach the same accuracy every two years… and the computing power applied to AI has been growing >4Γ— per year”. These two trends together yield a >10Γ— annual effective improvement – very much in line with Sacks’ argument.

However, a dose of skepticism is healthy. Predicting technological progress is notoriously difficult. Unforeseen roadblocks (or a leveling off of investment) could slow things down. On the other hand, breakthroughs (like new algorithms, or quantum leaps in hardware design) could accelerate things further.

But as of mid-2025, the roadmap that Sacks outlines – vastly better models, chips, and compute – is broadly corroborated by industry data and plans. If anything, the real debate is not “will AI improve 1,000,000Γ—?”, but rather what will society do with AI if and when it becomes millions of times more powerful than today.

Sacks believes most people underestimate the coming shock; the evidence reviewed here suggests he’s right that a paradigm shift is coming, even if the exact figure of a million-fold can be quibbled with. Barring unexpected stagnation, AI circa 2028 will likely make today’s ChatGPT look as quaint as a flip phone in the age of the smartphone.

Survivor Strategy in Business: Outwit, Outplay, Outlast

πŸ€– This article was generated using OpenAI’s deep research.

How the strategic principles of a reality TV show mirror successful business practices

The reality TV show Survivor is often described as a social experiment in strategy and human behavior. Stranded contestants must “outwit, outplay, outlast” each other for 39 days to win – a process that mirrors challenges in the business world. In both arenas, individuals navigate limited resources, intense competition, and the need to adapt under pressure.

Many strategic principles that lead to success on Survivor – leadership, adaptability, alliances, risk management, and resilience – have clear analogues in company and professional cultures. Below, we analyze each of these parallels with examples from Survivor and real-world business case studies, supported by psychological insights and research.

Leadership: Balancing Task and Relationship

Survivor Lesson

On Survivor, those who jump into a leadership role without building relationships often become early targets. A classic pattern is that a self-appointed “project manager” works feverishly to build shelter or gather food while neglecting social bonds – and ironically gets voted off first.

In one analysis of the show, hard-working tribe members focusing only on tasks were eliminated before the “slackers” who spent time chatting and bonding. The reason is simple: the social leaders on the beach were actually forging trust and alliances, which protected them from being voted out.

This underscores that effective leadership requires more than directive authority; it demands emotional intelligence and team-building. Research in group dynamics supports this: great leaders achieve goals and maintain relationships – “tasks and relationships are both essential in leadership.”

Business Parallel

In business culture, the same lesson applies. A manager solely focused on metrics and deadlines, without fostering trust or empathy, can alienate their team. Companies with high-performing cultures emphasize both performance and people.

Case Study: Microsoft’s Cultural Transformation

Microsoft’s CEO Satya Nadella famously shifted the company from a combative, internal competition mindset to a collaborative “learn-it-all” culture, encouraging growth and empathy over ego. That cultural turnaround, driven by leadership style, has been credited with revitalizing Microsoft’s innovation and morale.

Just as on Survivor a tribe’s morale and unity can determine its success in challenges, in organizations a leader who builds a supportive, values-driven culture will see better engagement and loyalty.

The takeaway: whether guiding a tribe or a corporate team, leadership is an exercise in influence and inclusion – aligning people toward a goal while making each member feel safe and valued.

Adaptability: Navigating Change and Uncertainty

Survivor Lesson

Adaptability is perhaps the most celebrated trait on Survivor. The host Jeff Probst introduced the very first season by telling contestants they “must learn to adapt or they’ll be voted off.”

Indeed, every episode brings new twists – a sudden tribe swap, a hidden immunity idol in play, an unexpected rule change – and the players who thrive are those who can pivot their strategy on a dime. For instance, if a dominant alliance collapses or a trusted ally is blindsided, a savvy contestant will quickly recalibrate, form new partnerships, or shift game plans to stay alive.

Winners often highlight their flexibility; as one Survivor motto goes, “plan long-term, but be ready to change plans overnight.” The Survivor format forces a balance between having a strategy and staying agile in execution.

Business Parallel

The business world is no less unforgiving to those who fail to adapt. In an era of rapid market disruption, companies must continuously evolve or risk obsolescence.

Case Study: Netflix vs. Blockbuster

A prime example is Netflix’s transformation from a DVD-by-mail service to a streaming powerhouse. When technology and consumer preferences shifted, Netflix embraced the change – investing in streaming technology early, revamping its business model, and even later producing original content. This organizational agility allowed Netflix to dominate an industry that once toppled Blockbuster. As one case study notes, Netflix “continuously adapt[ed] to market changes,” setting a benchmark for digital transformation.

In contrast, companies that failed to adapt – like Kodak, which stuck with film in the face of digital photography – illustrate how inflexibility can be fatal.

Adaptability is also a key component of resilience research. Business scholars define resilience as an organization’s “timely adaptation to both immediate and gradual changes in the business environment.”

The strategic parallel is clear: adapt or perish is the rule in both Survivor and business.

Alliances and Team Dynamics: The Power of Trust

Survivor Lesson

In Survivor, no one wins alone. From the outset, contestants scramble to form alliances – small teams whose members agree to protect each other and vote together. A strong alliance provides safety in numbers: if you have the majority on your side, you control the vote and can eliminate outsiders.

Example: Boston Rob’s Alliance Management

In Survivor: Redemption Island, Boston Rob Mariano forged an alliance and famously forbade its members from even talking one-on-one with those outside the group – an extreme but effective way to build loyalty by treating everyone else as a common enemy. His tight-knit alliance, bound by trust and a clear vision, carried him to victory.

More generally, alliances in Survivor are maintained by constant communication and reassurance; players “check in” with allies to quell paranoia and keep everyone on the same page. When alliances break, it’s often due to a loss of trust or someone seizing an opportunity to get ahead (the well-timed betrayal).

Thus, Survivor illustrates both the value of collaboration and the fragility of partnerships in a competitive environment.

Business Parallel

Professional cultures likewise hinge on the power of alliances – both internal teams and external partnerships. Within organizations, high-performing teams resemble Survivor alliances in their trust and mutual commitment.

Colleagues who build genuine relationships and have each other’s backs create a supportive culture that can weather challenges. Leaders are advised to cultivate this trust by being transparent and “checking in” regularly with team members (much as Survivor allies do), which leads to better loyalty and performance.

Case Study: Uber and Spotify Alliance

Externally, companies form strategic alliances to undertake projects that each party couldn’t accomplish alone. A textbook example is the partnership between Uber and Spotify: Uber wanted to improve riders’ experience with music, and Spotify had the streaming technology. By teaming up, Uber provided the user base and Spotify the tech, creating a mutually beneficial service neither could have built alone.

In business, as in Survivor, the strongest alliances are those where both parties benefit and share common goals.

However, there’s also a parallel in knowing when to re-evaluate alliances. Just as a Survivor player might decide to “flip” on an alliance that no longer serves their path to the end, companies must periodically assess their partnerships and team dynamics.

The key is that whether you’re on a tropical island or in a boardroom, relationships built on trust, open communication, and aligned interests are a cornerstone of survival and success.

Risk Management: Balancing Bold Moves and Caution

Survivor Lesson

Every move in Survivor is a risk calculation. Should I vote off a strong ally before he turns on me? Should I play my hidden immunity idol now or save it? Bold, risky moves can yield big rewards (such as blindsiding a major threat), but they can also backfire spectacularly.

Conversely, playing too cautiously can be a slow death sentence – if you never take initiative, you may end up carried to the end with no resume, only to lose the final jury vote.

As one entrepreneur observed, “Survivor is all about taking risks – forming alliances, hunting for immunity idols, and deciding when to play them.” The winners tend to be those who take calculated risks at the right time.

For instance, a contestant might risk angering someone by flipping alliances in order to break up a rival power bloc, calculating that the long-term benefit outweighs the immediate fallout.

On the other hand, a notorious example of poor risk management on Survivor is overplaying: contestants who made one move too many (a premature blindside or a needless gamble) often found themselves voted out once they burned trust.

Thus, successful players treat risk like a resource to be managed – they neither shy away from it nor gamble aimlessly, but rather weigh the odds and potential payoff of each decision.

Business Parallel

Entrepreneurs and businesses face a similar tightrope. Pursuing innovation or growth invariably involves risk, but failing to take risks can mean stagnation. As Facebook’s founder learned early on, “in a world that’s changing so quickly, the biggest risk you can take is not taking any risk.”

In corporate strategy, this translates to a need for bold vision. Companies that never venture beyond their comfort zone – sticking only to legacy products or markets – may survive in the short term but eventually lose out to more daring competitors.

Yet, uncalculated risk is just as dangerous in business as it is in Survivor. That’s why modern enterprises emphasize risk management: identifying potential threats and opportunities, analyzing them, and making informed decisions.

Case Study: Amazon’s AWS Gamble

When Amazon decided to launch Amazon Web Services (AWS), it was a significant risk – entering a new industry (cloud computing) far from its core online retail business. Many questioned the move, but it was a calculated risk based on the growing need for cloud infrastructure. That bold bet became one of Amazon’s biggest growth drivers.

In contrast, think of a company like Kodak, which famously avoided the risk of pivoting to digital photography (despite inventing one of the first digital cameras) for fear of cannibalizing its film business. That reluctance to take a necessary risk proved fatal when digital tech left Kodak behind.

The optimal approach in both contexts is strategic risk management – know when to stick with the safe choice and when to strike out boldly.

Resilience: Outlasting and Bouncing Back

Survivor Lesson

Survivor is as much a mental and physical endurance test as it is a strategy game. Contestants are subjected to hunger, fatigue, and the stress of social conflict. The ability to persevere through hardships – to stay focused and positive after nights of rain on a bamboo shelter or after being blindsided by allies – is often what separates the winner from those who quit or implode.

In psychological terms, Survivor demands resilience and grit. Research on high achievers has found that grit (passion and perseverance) and resilience (adaptability to change) are key predictors of success.

Example: Chris’s Comeback

A famous example is in Survivor: Edge of Extinction, where a contestant voted out on Day 8 (Chris) refused to give up, endured on a punitive exile island, earned his way back in on Day 35, and ultimately won the game. His journey epitomized resilience – the capacity to bounce back from failure and still triumph.

Even in regular seasons, many winners face moments of adversity (like losing allies or facing a minority position) but find ways to regroup and push forward. This quality of mental toughness and flexibility under pressure is highly prized; as the saying goes, Survivor isn’t just about outwitting and outplaying, but also outlasting.

Business Parallel

In the business world, resilience is equally vital. Companies encounter crises, disruptions, and failures regularly – from economic recessions to supply chain breakdowns to global pandemics.

Organizational resilience refers to a company’s ability to absorb the shock, recover, and even come out stronger. A useful definition is “the ability of an organization to anticipate, prepare for, respond to, and recover from adverse events… it’s about adaptability and the strength to bounce back.”

Case Study: Pandemic Adaptations

Companies that had prepared contingency plans and adaptive processes proved far more resilient during events like the COVID-19 pandemic than those that were caught off guard. For example, many restaurants and retailers that quickly pivoted to online ordering, curbside pickup, or other creative solutions in 2020 managed to survive or thrive, whereas others that couldn’t adapt had to shut doors.

Resilient organizations typically foster a culture of learning and flexibility – they treat setbacks as learning opportunities and encourage problem-solving at all levels. This parallels Survivor’s resilient players who treat getting blindsided not as game over but as a wake-up call to adjust their play.

In sum, resilience in business means creating a company that can outlast hardships, much as the Sole Survivor is the one who withstood everything thrown at them.

Conclusion: Outwitting, Outplaying, Outlasting in Business

Though Survivor is a televised game and business is real life, the strategic parallels are striking. Both domains involve people working together (and at times against each other) under pressure, requiring a mix of savvy strategy and social finesse.

Key Strategic Parallels:

  • Leadership: Culture and relationships can trump raw efficiency – a lesson every manager should heed in building team cohesion.
  • Adaptability: Markets and workplaces, like the game’s twists, change rapidly, favoring those who pivot rather than cling to old plans.
  • Alliances: Trust and collaboration are powerful assets, whether you’re forging a voting bloc on an island or a cross-functional team in an office.
  • Risk Management: Both contexts call for courage balanced by caution – the wisdom to take bold action when warranted, but always with a calculated understanding of consequences.
  • Resilience: The endurance to withstand setbacks and come back stronger is essential in both arenas.

Professional and company cultures can draw direct inspiration from Survivor. For instance, promoting open communication and trust in a company mirrors the ally-building on Survivor that keeps tribes strong. Encouraging a mindset that treats challenges as opportunities to innovate is akin to the Survivor mentality of adapting to any twist.

Even the endgame of Survivor – persuading a jury of peers to reward you – has a business parallel in maintaining one’s reputation and integrity; ultimately, success is decided by people’s trust and perception of you.

By examining Survivor through a strategic lens, we see a microcosm of competitive strategy and team psychology. It reinforces lessons backed by management science and psychology research, but in a visceral, entertaining way.

As one business writer noted, the skills and traits that win Survivor are “crucial in the business world” – a testament to how art can imitate life. In both Survivor and business, those who lead wisely, adapt quickly, cultivate allies, manage risk, and demonstrate resilience are the ones most likely to thrive.

Reflecting on 2018 & Charting New Paths for 2019

I’m thrilled to unveil my brand new interactive experience: My 2019 Resolutions & 2018 Year in Review. This project became much more than just a simple tracker – it evolved into a meaningful journey of reflection, celebration, and intentional goal-setting for the upcoming year.

Screenshot of Jeremy Person's interactive 2019 Resolutions & Year in Review
Click on the image to explore the interactive experience

Why I Created This

Looking back at 2018 helped me appreciate how far I’ve come, while planning for 2019 gives me clear direction and purpose. The process of creating this interactive tool made me more intentional about both my celebrations and aspirations.

What did you accomplish in 2018 that made you proud? And what dreams are you chasing in 2019? I’d love to hear about your journey in the comments below!

Join the Conversation

What reflection tools do you use to look back on your year? How do you set meaningful goals for the future?

A Brief Encounter: My Memory of Anthony Bourdain

A Moment of Recognition

If you have ever had a conversation with me and Anthony Bourdain is mentioned, you know I once saw him at a Phoenix airport. I glanced up while waiting for my flight and saw Bourdain walking down the terminal alone. I almost immediately knew it was him.

I suspect he could detect that in my eyes as we exchanged glances. He gave me the short, “I know you know who I am and thank you for leaving me alone” sort of glance, and he continued on his way. I will forever remember Bourdain as someone always moving forward but seemingly wanting to be left alone.

Unfiltered Honesty with Unwavering Compassion

I respected Bourdain because very few people in life have the ability to courageously speak their mind while remaining open-minded. Society is enamored with the unfiltered “what will they say next” personality until they say something that crosses the line. Bourdain seemed to always understand that line and gave us just the right dose of reality when unpacking lessons learned during his world travels.

Christina spent some time in Nicaragua and I love hearing her stories. One of them is about her weekly trips to La Chureca, where she saw unmentionable things even Bourdain couldn’t show on television. During his segment there, Bourdain seemed understandably troubled by what he was seeing as well.

Anthony Bourdain Nicaragua Clip

“He had a softer spot for kids. During a Q&A at the Prospect Park food festival Googamooga in 2012, Bourdain called on my 9-year-old daughter, who asked the best way to cook a unicorn. Not missing a beat, Bourdain smiled and shrugged: ‘Rare.'”

His Enduring Legacy

Bourdain will be missed by many, but especially me. His ability to connect with people across all walks of life, to showcase humanity in its rawest form, and to tell stories that needed to be told has left an indelible mark on how we see the world. Perhaps what made him so special was his genuine curiosity about others and his willingness to be changed by his experiences.

Bourdain’s Approach to Understanding the World:

  • Break bread together – Food as the universal language that opens doors
  • Listen to each other’s stories – Genuine curiosity as the foundation of respect
  • Recognize our shared humanity – Finding connection across cultural divides
  • Speak honestly – Truth as a form of respect for your audience
  • Be willing to be changed – Approaching the world with an open mind

In a world increasingly divided, we could all learn from Bourdain’s approach: Break bread together, listen to each other’s stories, and recognize our shared humanity.