The Dark Side of Data

With my recent move to a Data Science team, and acknowledging that my recent reading has been overwhelmingly data-optimistic, I took a recommendation from an analyst colleague for a book that is quite the opposite – “Weapons of Math Destruction”, by Cathy O’Neil. Below is my summary of it, followed by a one-page nine-question anti-WMD framework inspired by the book, and finally a hypothetical worked example of applying the framework to an algorithm in the RAF (machine learning in promotions). The views expressed mainly reflect content from Cathy’s book (hence the examples are predominantly from the US rather than the UK, and not all will generalise to the UK), with a couple of my personal observations – none represent my employer’s views.


“Weapons of Math Destruction” is written by Cathy O’Neil, a data scientist who draws on both her own personal experiences working in e-commerce & a number of illustrative case studies to outline how big data and the algorithms powered by it are increasingly being used as “Weapons of Math Destruction” (‘WMD’ for short). 

In short, not every algorithm or model (if these terms are confusing, see below) is a WMD – only those that cause harm, usually on account of three hallmark features: their opacity, scale & the damage they cause. This damage seems most acutely felt by the least fortunate, usually unknowingly, and often seems to perpetuate vicious cycles of poverty, inequality, and incarceration. Let’s dive into some examples.

Here are some definitions, for reference:

Assessing Teachers’ value-add

One of the first and most shocking examples of a WMD Cathy cites is Washington DC Schools’ “IMPACT model”. It came about in 2007, as a result of plans DC’s mayor set out to turn around the city’s underperforming schools. Their hypothesis was that a key reason for the schools’ underperformance was a small minority of poor teachers. It followed that by identifying and firing the worst teachers, the average quality of teaching would rise. Though pretty uncompromising, it doesn’t sound like an entirely malign plan.

To implement this plan, they collected data on teachers’ performance so they could identify the bottom 2%. Their algorithm allocated 50% of its score to the teacher’s “value-add” to students. Put simply, if a child predicted a B gets an A, the teacher has a positive “value-add” score – and vice versa if a child predicted a D gets an E.
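To make the mechanics concrete, here’s a toy Python sketch of the “value-add” idea described above. The grade points, predictions and results are all invented for illustration – the real IMPACT model was far more complex and, crucially, opaque:

```python
# Toy sketch of a "value-add" score: the gap between a pupil's predicted
# grade and the grade actually achieved, averaged over a class.
# (Hypothetical grade-point mapping; nothing here reflects the real IMPACT model.)

GRADE_POINTS = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def value_add(predictions, results):
    """Mean difference between achieved and predicted grade points."""
    gaps = [GRADE_POINTS[got] - GRADE_POINTS[pred]
            for pred, got in zip(predictions, results)]
    return sum(gaps) / len(gaps)

# A child predicted a B who gets an A contributes +1; a predicted D
# who slips to an E contributes -1 – here the two cancel out.
print(value_add(["B", "D"], ["A", "E"]))  # 0.0
```

Even in this toy form you can see the problem lurking: the score hangs entirely on the accuracy of the *predicted* grades, which are themselves the output of another model.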

Though this sounds like an intuitively fair way of assessing teacher quality, it turned out to be anything but – leading to such wild variance in scores that one teacher went from 6/100 one year to 96/100 the next. The teacher in question clearly had not changed from a complete liability to a once-in-a-generation genius in the space of a year, so what had gone wrong?

Firstly, this is a great example of how big data is often used in an overly reductionist way – associating a complex phenomenon (kids’ educational attainment) with a single factor (teacher performance) simply because it’s convenient to do so (or through ignorance, perhaps). Kids’ educational attainment is of course partly down to teachers, but it’s also down to the child’s personal motivation, mental health, curriculum design, home circumstances (a reviewer of this article kindly pointed to many studies confirming this, which are summarised here) and a great number of other things. In this case, even though teachers’ jobs were on the line, the algorithm wasn’t made to account for the complexities of real life; real life was reduced to an overly simplistic score to suit the purpose of efficiently sacking 2% of teachers a year.

Secondly, this faux-mathematical approach is statistically laughable, on account of its lack of statistical power. Big data is more powerful at making predictions when the data is, well… big. Small samples (a class size of 20–40) do not qualify as big data and can display large variance over time – meaning that one year a teacher might have a dreamy class of highly engaged students to whom they’re able to add significant value, whilst the next class has a disruptive child and a couple of children with difficult home circumstances, and suddenly the balance is tipped. Should an algorithm be sacking teachers who happen to have a tough class this year? What a perverse incentive – putting teachers off going to the more challenging schools where they might truly be able to make a difference, because there is a risk that an algorithm will lead to their sacking in year one.
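A quick simulation shows the small-sample problem. Suppose (purely as an assumption, to isolate the noise) that each pupil’s achieved-minus-predicted gap is random noise around zero – then a class-sized average swings wildly from year to year, while a genuinely “big” sample barely moves:

```python
# Why class-sized samples give noisy "value-add" scores: the average of
# 25 noisy pupil gaps varies roughly ten times more than the average of 2,500.
import random
import statistics

random.seed(0)

def class_average(n_pupils):
    # each pupil's achieved-minus-predicted gap drawn from pure noise
    return statistics.mean(random.gauss(0, 1) for _ in range(n_pupils))

small = [class_average(25) for _ in range(1000)]    # one real class, many years
large = [class_average(2500) for _ in range(1000)]  # "big data" sized samples

print(round(statistics.stdev(small), 2))  # roughly 0.2 – large year-to-year swings
print(round(statistics.stdev(large), 2))  # roughly 0.02 – far more stable
```

Under this assumption, a teacher’s score can lurch from “bottom 2%” to “top 2%” on noise alone – exactly the 6/100 to 96/100 swing seen in practice.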

These two features combine to make the algorithm harmful (trait 1 of a WMD) to teachers – they could be sacked because of an overly simplistic algorithm and the bad luck of a tough class one year. Imagine the number of good teachers who lost the confidence ever to teach again after being falsely labelled as in the bottom 2%. Unfortunately, when this did happen, it happened at scale across the entirety of DC (trait 2 of a WMD). Though there was of course uproar, teachers had no way to challenge the decisions because of the algorithm’s opacity (trait 3 of a WMD) – justified on account of the algorithm being ‘proprietary’. This meant teachers couldn’t delve into why exactly they had been sacked, nor, therefore, appeal the decision.

Race and the poverty cycle

Perhaps the most topical subject that comes up time and again in this book is WMDs that unfairly harm the poor and ethnic minorities. Below, I’ll try to bring a few of the examples given in the book into a single narrative.

Let’s imagine you’re an 18-year old black male from a rough neighbourhood in the Bronx.

Through no fault of your own you find yourself with little money, a low standard of education and little in the way of prospects. If you were asked who was to blame for your situation, I doubt algorithms would so much as cross your mind.

You wake up in the morning & check your social media – as usual, ads for payday loans are plastered all over your newsfeed. You probably think this is normal, that everyone else sees these ads, but actually you’re selectively targeted for them based upon your demographics (presumed low educational attainment, low savings & income). Just a few blocks away in the nicer part of town, unbeknownst to you, the lads your age never see those predatory loans – they get ads for low-interest loans from the reputable banks their parents set them up with as children.

You lie in bed daydreaming. You’ve always wanted a car and without one it’s hard to get a better paying job outside of the neighbourhood, and your savings will almost cover it. You deliberate the decision for a while then later that week you find a car that fits, take the loan and buy it. 

Unfortunately, you hadn’t really accounted for just how expensive insurance is – way more than for your white middle-class friend from high school. Perhaps he was exaggerating the deal he got? Unbeknownst to you, this is because many car insurance quotes in the US take into account your credit rating. No doubt when the insurers built their algorithms, they felt that credit rating might be a proxy (a surrogate) for reliability, or that it correlated with insurance claims in some way. But this misuse of a proxy prejudices against the poor, who will typically have lower credit ratings – and why would being poor make you a worse driver? Cathy cites one shocking example, where a study found that car insurance was far more expensive for someone with a completely clean driving record & a low credit score than for someone with a drink-driving conviction & a high credit score.

Having taken out another payday loan to cover the insurance, you miss a payment. You know your credit rating will take a hit, so you look for a job ASAP to get yourself out of this spiral – it should be easy now you have a car and can be more flexible. You apply for dozens of jobs in the local area and are rejected from many of them. Though you put it down to your lack of experience in the service industry, an algorithm may have more to do with it than you think. Many employers take into account your credit score (derived from an algorithm that is roughly 35% payment history, 30% amount owed, 15% length of history, 10% new credit and 10% types of credit used) as part of their application process, as a proxy for how reliable you are – presuming that if you can keep up with your loan repayments, you’ll be reliable at work. Unfortunately this often rules out the people who most need jobs (those in debt) from getting them – and highlights the scale at which this algorithm (the credit score) unintentionally creates harm by propagating the cycle of poverty, whilst you probably don’t even realise it’s happening.
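The weighting described above can be sketched as a simple weighted sum. The component scores below are entirely hypothetical 0–100 inputs – real credit-scoring models are proprietary – but the sketch shows why a single missed payment bites so hard:

```python
# Sketch of the FICO-style weighting described above: 35% payment history,
# 30% amount owed, 15% length of history, 10% new credit, 10% credit mix.
# All component values are invented for illustration.

WEIGHTS = {
    "payment_history": 0.35,
    "amount_owed": 0.30,
    "length_of_history": 0.15,
    "new_credit": 0.10,
    "credit_mix": 0.10,
}

def credit_score(components):
    """Weighted sum of 0-100 component scores."""
    return sum(WEIGHTS[k] * components[k] for k in WEIGHTS)

# One missed payday-loan repayment drags down the biggest-weighted component:
before = credit_score({"payment_history": 90, "amount_owed": 70,
                       "length_of_history": 40, "new_credit": 50, "credit_mix": 50})
after = credit_score({"payment_history": 60, "amount_owed": 70,
                      "length_of_history": 40, "new_credit": 50, "credit_mix": 50})
print(round(before, 1), round(after, 1))  # 68.5 58.0 – one slip costs 10.5 points
```

Because payment history carries the largest weight, the person already juggling predatory loans takes the biggest hit – the proxy punishes hardest exactly where it is least informative about job reliability.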

Finally you secure a job at a famous coffee chain I won’t name in this blog. It’s not your dream job, and you’ve only got 24 hours a week – but hey, that’s fine, because you’ll get a second job to fill in the time and supplement your earnings.

Unfortunately this proves impossible – because your schedule is horrendous. Damn, the boss must hate you – or perhaps, you ponder, it’s some tactic to weed out those who aren’t committed? You seem to have only shifts at opening time (0500–0900), closing time (2000–2200) and seemingly random day shifts, usually at weekends.

You guessed it – our famous coffee chain uses an algorithm for its scheduling, which optimises for profits by minimising staffing, responding dynamically to historical demand, weather forecasts and local events. This means you only know your shifts a couple of days in advance, and most fall around the opening/commuting rush and then closing time, despite the fact that this schedule is exhausting, precludes you getting another job, and forces two commutes a day. You raise it with the manager, but he points out that at such fine margins they need a mathematical model to dictate scheduling – it wouldn’t be cost-efficient to keep you all in over the quiet mid-morning or mid-afternoon periods. The mathematical model has been ruthlessly trained to optimise shift patterns for cost-efficiency at the expense of staff wellbeing & retention. And why wouldn’t it be? There’s a queue of replacement candidates at the door should they need them.
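A minimal sketch of such a demand-driven scheduler, with invented hourly demand numbers, shows how the fragmented shifts fall out naturally when the only objective is matching staff to forecast demand:

```python
# Toy demand-driven scheduler: staff each hour to forecast demand only,
# with no constraint on how humane the resulting shift pattern is.
# (Hypothetical demand figures, customers per hour from 05:00 to 21:00.)

demand = {5: 30, 6: 50, 7: 80, 8: 60, 9: 20, 10: 10, 11: 15, 12: 40,
          13: 30, 14: 10, 15: 10, 16: 15, 17: 25, 18: 20, 19: 15,
          20: 30, 21: 25}

CUSTOMERS_PER_BARISTA = 20

# staff needed per hour = ceiling(customers / capacity per barista)
rota = {hour: -(-customers // CUSTOMERS_PER_BARISTA)
        for hour, customers in demand.items()}

print(rota[7], rota[10])  # 4 1 – four staff at the 07:00 rush, one in the dead mid-morning
```

Nobody is scheduled across the quiet middle of the day, so workers get exactly the split opening-and-closing pattern described above – the model is doing its job perfectly; wellbeing was simply never in the objective.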

You’re exhausted. Weekends are written off with opening and closing shifts, you get 5 hours sleep between these shifts, and you’re only actually working 4-6 hours a day so the pay is terrible. You can’t take on a second job due to the unpredictability of the shifts and your monthly repayments on the loan seem to take barely anything off the amount outstanding since the interest rate is sky high.

You finally have an evening off. One of your friends is hosting a little get together – a few beers whilst watching the basketball. You take a couple of your dad’s beers from the fridge, stuff them in your bag, and make your way over to his house just around the corner – cracking your bottle with excitement and taking a few hard-earned sips on the way over. 

Unfortunately on the way, for the third time this year, you’re stopped and frisked by the police. This isn’t an isolated case – a study by the NY Civil Liberties Union found that though 14–21-year-old black & Latino males made up ~4% of the population, they accounted for ~40% of stop & frisk checks. 90% turn out to be innocent, but many of the remaining 10% are done for petty crimes such as underage drinking or carrying a joint – crimes that rich white kids commit every weekend at college frat parties, but are never charged for.

You’d be forgiven for thinking that there was no model underlying this, but unfortunately you’re probably wrong. Police departments (like almost any other public department) have to be careful with their spending, focussing it as efficiently and effectively as they can on tackling crime. Recently, some police forces have turned to predictive modelling software to direct their policing efforts to where the most crime is. Though this sounds like a good idea, let me just play through how it actually pans out. A rough, predominantly black neighbourhood will have more historic crimes in it – partly due to it being a low-income, high-unemployment area, and partly, some might argue, due to historic police bias against black people. The model is trained on this data and therefore focuses police attention on this area.

Now, of course, policing this area more and conducting more stop-and-frisk searches increases the number of “hits” for the model – the number of crimes picked up. All the while, in the rich white area, people may be abusing class A drugs, underage drinking or vandalising property, but these crimes aren’t picked up, because less policing is focussed there – due to the algorithm. So the algorithm ends up feeding its own sick version of reality – focussing police efforts on the poor neighbourhoods and picking up large numbers of often petty crimes whilst ignoring the better-off neighbourhoods – training a model that is increasingly unbalanced towards the poor neighbourhoods. That’s not even to mention that the more petty offences are punished within these neighbourhoods, the more incarceration there is, and the lower the prospects of locals ever getting a job and breaking the poverty cycle.
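This feedback loop is easy to simulate. In the toy model below – all numbers invented – two neighbourhoods have the *same* true offending rate, but one starts with more recorded crime; because patrols follow the recorded data, the initial skew is faithfully reproduced year after year:

```python
# Toy simulation of the predictive-policing feedback loop: identical true
# crime rates, but historic records skewed by past policing decisions.
# (Hypothetical figures, for illustration only.)

true_rate = [0.05, 0.05]      # identical underlying offending in both areas
recorded = [100.0, 20.0]      # historic records already skewed 5:1

for _ in range(5):
    total = sum(recorded)
    patrols = [1000 * r / total for r in recorded]   # patrols follow the data
    # crimes recorded next year are proportional to patrols * true rate
    recorded = [p * rate * 100 for p, rate in zip(patrols, true_rate)]

share = recorded[0] / sum(recorded)
print(round(share, 2))  # 0.83 – the 5:1 skew never corrects itself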

Back to the story… after you’re stopped, you resist arrest by running off – in hindsight a bad decision, but in the moment you’re terrified at the thought of a criminal record and its effect on your employment prospects. You’re apprehended, though, and end up in a courtroom the next week. You know that historically race has played a major part in sentencing – sentences imposed on black men in the federal system are 20% higher than those for white men convicted of similar crimes – so you flag it with your legal aid lawyer. Not a problem anymore, she replies: there’s an algorithm to aid in sentencing (at the time WMD went to print, these were being used in 24 states) to make it fairer.

The problem, though, is that these algorithms are opaque to the people they’re applied to, operate at scale, and cause real harm – by perpetuating the cycle of poor, and largely black, people being incarcerated.

How? Let’s explore the example of the “LSI-R” (Level of Service Inventory – Revised). Prisoners fill out the questionnaire and a simple algorithm (totting up scores from the different questions) then judges them to be at low, medium or high risk of reoffending if released – and this, in some states, is used to determine sentence lengths. Though you’re not overtly asked your race, imagine how different the results of this risk score would be for a white middle-class man vs a black working-class man, regardless of crime. One question asks about “the first time you were ever involved with the police” – the white guy has probably never been stopped and frisked; the black guy has quite possibly been stopped and frisked several times that year, just because of his neighbourhood & the colour of his skin. Another question asks whether any of their friends or family have criminal records – which is, of course, more likely for a black person from a poor neighbourhood than for a white middle-class person. So instead of your sentence length being judged purely on you and your crime, it reflects a system that makes you roughly ten times more likely to be stopped and frisked as a young black man, and something else you have no control over – your friends’ and family’s criminal records.
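The “totting up” mechanism can be sketched in a few lines. The questions, weights and thresholds below are hypothetical, not the real LSI-R instrument – but note that two of the items score the circumstances of the defendant’s neighbourhood, not the defendant’s own conduct:

```python
# Sketch of an LSI-R-style questionnaire: answers totted up into a score
# and bucketed into risk bands. Questions and thresholds are invented.

def risk_band(answers):
    score = sum(answers.values())
    if score <= 2:
        return "low"
    return "medium" if score <= 4 else "high"

# Same crime, same prior record – different postcodes:
middle_class = {"prior_convictions": 1, "first_police_contact_young": 0,
                "friends_family_with_records": 0, "unemployed": 0}
poor_neighbourhood = {"prior_convictions": 1, "first_police_contact_young": 1,
                      "friends_family_with_records": 1, "unemployed": 1}

print(risk_band(middle_class))        # low
print(risk_band(poor_neighbourhood))  # medium – a longer sentence, same crime
```

The only differences between the two defendants are items the poorer one cannot control – yet they are enough to move him up a risk band.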

What you measure you become

Having spoken about the inappropriate use of proxies to judge the likelihood of an outcome – such as whether your friends or family have a criminal record being used, in part, to determine your sentence length – I want to explore one other example Cathy gives, of proxies damaging university education for all members of society.

Back in the 1980s, a second-tier news provider, “U.S. News”, decided to run a new feature: university rankings. Their journalists – who, we can probably all agree, should not be defining the shape of national education – chose 15 proxy metrics for measuring the quality of US universities. They then took these, mashed them together through an algorithm designed on the back of a fag packet, and output a national ranking.

The ranking became extremely popular, and became living proof (albeit a twisted and unfortunate truth) of one of my maxims – “what you measure, you become”. 

Now, one might argue that this could only have driven up the quality of the educational offer. Let’s explore why this hasn’t been the case in the US.

First of all it’s important to acknowledge just how crucial these rankings became – slip in the rankings and suddenly universities would find themselves in a vicious spiral: a lower ranking meant lower-quality applicants, less pull for high-quality professors and less money donated by alumni to their alma mater. Over the coming years their ranking would be in free-fall. The opposite effect could start a virtuous cycle towards the elite. And so universities poured extraordinary quantities of cash into upping their ranking scores – optimising their offering to satisfy the ranking model – a model, just to recap, informed not by academics or even students themselves, but by a few journalists at a second-tier news publication, designed to sell papers.

The first point to stress about the madness of this ranking system is that proxies are not necessarily directly related to the outcome they are intended to predict. To give but one example, one of the fifteen proxies used for the U.S. News ranking was the SAT scores of incoming students – the journalists believed higher average SAT scores reflected higher-quality candidates, and so they bumped up your ranking. The problem with this is two-fold. Firstly, those at poor state schools may be very academically talented but score relatively poorly on the SAT compared to those at expensive private schools – use of this proxy motivates universities to raise the minimum SAT score requirement, sacrificing diversity and the opportunity for very bright individuals from less well-off backgrounds to attend. Secondly, it motivated some absolutely crazy responses – Baylor University paid for all of their incoming students (after they’d been accepted!) to re-sit the SAT, to see if they could get a better score. Imagine the sheer cost of administering this, for absolutely zero educational benefit, simply to boost the ranking score – to feed an algorithm.
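A ranking of this kind is just a weighted sum of proxies. The metrics and weights below are invented (the real U.S. News formula used 15 metrics), but the sketch shows how a university can buy a better score by optimising one proxy without improving teaching at all – and note that cost appears nowhere in the formula:

```python
# Sketch of a U.S. News-style ranking: a weighted sum of proxy metrics.
# Metrics, weights and values are all hypothetical.

WEIGHTS = {"avg_sat": 0.4, "alumni_giving": 0.3, "selectivity": 0.3}

def ranking_score(metrics):
    """Weighted sum of 0-100 proxy metrics – note: no 'cost' term at all."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

before = ranking_score({"avg_sat": 60, "alumni_giving": 50, "selectivity": 50})
# Pay admitted students to re-sit the SAT: avg_sat rises, nothing else changes.
after = ranking_score({"avg_sat": 70, "alumni_giving": 50, "selectivity": 50})
print(round(before, 1), round(after, 1))  # 54.0 58.0 – rank up, zero educational benefit
```

Whatever sits in the weighted sum becomes the target – “what you measure, you become” – while anything left out of it (like student cost) can be inflated freely.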

And who foots this cost, who bears the harm of this algorithm, entirely unknowingly? With US university costs going up by 500% from 1985 to 2013, it’s obvious – the students do. And again this leads to the poor disproportionately missing out on the opportunity of a decent education, put off by the promise of extreme debt that is often simply funding universities’ desire to satisfy an algorithm. Is cost one of the 15 metrics by which universities are ranked, you ask? Of course not.

As a side note (not from the book) –  this ranking system might indirectly lead to the end of tens or hundreds of universities over the next year or so, hastened by the COVID pandemic. The current edtech (education tech – such as massive open online courses – ‘MOOCs’) revolution means that students can now get a world-class education from the comfort of their own home for a tiny fraction of the price – and with social distancing rules, universities can no longer offer the face-to-face added value you paid over the odds for previously. These “MOOCs” are undercutting, by a frankly gargantuan amount, a university system which has optimised itself to satisfy proxies that are ill-related to educational quality whilst utterly ignoring cost. It seems likely that many of them, with the exception of those who have a strong brand or unique offering, might struggle to survive.

So what would have been a better way to do this? Well, the Obama administration did try to create a rejigged ranking system, but the pushback was fierce – these universities had spent years trying to orient around these metrics, after all. So instead the US education department simply released a whole load of data on each university online, so students could ask the data the questions that mattered to them – things like class size, employment rate post graduation, average debt held by graduating classes. It’s transparent, controlled by the user, and personal – Cathy labels it as ‘the opposite of a WMD’.

Developing an algorithm checklist

Hopefully, these cases have illustrated how mathematics – whether dressed up as an ‘algorithm’, a ‘model’, ‘machine learning’ or ‘AI’ – has the potential to be harmful & opaque, at scale. However, there is no doubt in my mind that such models/algorithms can also be very useful and will only increase in their use – just check out my podcast on AI & book review of ‘AI Superpowers’ for some of my thoughts on the positive uses of mathematics.

So how can we build great models, whilst being cognisant of the traps that we might fall into? I’ve tried to make my own little 9 point checklist for doing so (see overleaf, printed on a separate page so you can print it and pin it on your wall if that’s your thing) and in the next section we’ll road-test it with an example.

RAF case study – Machine Learning in the promotions board

Finally, let’s work through an example of putting these principles into action – taking the use of machine learning (ML) in promotions, an idea that has been explored by some large organisations. I stress that this is absolutely NOT a criticism of this approach to promotions – I’m actually beyond delighted that we’re exploring the use of cutting edge technology to aid in HR processes – nor do I know the ins and outs of what the results of these explorations were – I will just explore a hypothetical example of what might happen.  Let’s use the checklist.


1a. Why is the algorithm necessary?

Well, RAF promotions are decided by a board, based upon individuals’ scores in their yearly appraisal and two portions of free-text feedback written by their boss and boss’s boss. Reading the many thousands of reports is very time-consuming, so if an ML algorithm can analyse the reports and sort them into a rough ranking, the process might be more time-efficient (saving £s) – but more importantly it would give the humans on the board more time to evaluate the borderline cases (improving promotion decisions).

1b. What are the assumptions that the algorithm relies upon (document proxies and the confidence you have in their relationship to your ultimate goal)?

The algorithm does of course rely on some assumptions, two of which I’m going to take time to point out:

  • Firstly, that sentiment analysis (the likely predominant means of ranking these reports) of the written report is an accurate proxy for how suitable a person is for promotion. This may be a stretch… simply having a more verbose boss, who is more effusive in their description of you, could increase your chance of promotion under the principles of sentiment analysis.
  • Secondly, that training the algorithm on the past 4 years of reports would predict the type of people we want to promote next year. This could be completely untrue too – what if we decided that this year we wanted more technical people taking leadership roles as we transformed into a more technologically capable air force? We’d need to build a corrective aspect into the algorithm that sought technical competence.

To prove these points, let’s run an example. In the RAF, we are looking to promote those with a technical background as we seek to become more digitally enabled. Unfortunately, though a human can tell that candidate A might have the right skills to be eligible for promotion by reading this in her report – “turned around a desperately struggling IT department and kicked off cloud migration” – an online sentiment analyser gets it completely wrong, giving 60% confidence that the sentiment of this sentence is negative (see below screenshot). Candidate B might be an average IT worker who is rugby-obsessed, so his report contained “did a phenomenal job of leading an exceedingly successful local rugby team” – the sentiment analyser believes there’s a 94% chance this is positive (see screenshot). So our sentiment analyser flunks the candidate who’s transformed an IT department and promotes the rugby nut up the pecking order. The way the algorithm was implemented would, of course, have to guard against this.

The sentiment analyser high-scores the rugby nut and believes the IT guru has a negative report!
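To see *why* this failure mode occurs, here’s a toy lexicon-based analyser – this is emphatically not the tool in the screenshot, just a minimal sketch of how word-counting sentiment works, with a hand-picked word list:

```python
# Toy lexicon-based sentiment: count positive words minus negative words.
# Word lists are invented for illustration; real analysers are richer but
# share the same blind spot – they score words, not accomplishments.

POSITIVE = {"phenomenal", "successful", "exceptional", "superb"}
NEGATIVE = {"desperately", "struggling"}

def sentiment(text):
    words = {w.strip(".,").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

report_a = "Turned around a desperately struggling IT department and kicked off cloud migration"
report_b = "Did a phenomenal job of leading an exceedingly successful local rugby team"

print(sentiment(report_a))  # -2: the IT transformer reads as negative
print(sentiment(report_b))  # 2: the rugby report reads as glowing
```

The words describing the *problem the candidate solved* (“desperately struggling”) drag her score down, while the rugby report is stuffed with positive adjectives about an achievement irrelevant to promotion.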

1c. Are there any features that will perpetuate existing biases? (illustrate thoroughness by explaining the results of a variety of example user-interactions with the algorithm)

There are two obvious concerns that spring to mind. Firstly, we already know that the RAF’s higher ranks are predominantly white and male – and many would argue that this lack of diversity harms us. One would like to hope that there is no bias against either group in promotion boards, but we would have to be conscious that if there were, an algorithm might be trained to adopt this bias too. Even if no human bias towards white males existed in the training set, just consider the raw numbers of representation – it’s not difficult to imagine the algorithm noticing that only 5% of the reports containing the words “women’s team for [x]” lead to promotion, whereas 40% of those containing “men’s team for [x]” do. Unless corrected for, the algorithm might incorrectly take this as a causal relationship (i.e. gender determines promotion probability), instead of simply a reflection of the male-heavy gender split within the RAF. To address this, the reports would need any gender-indicating words (women’s/men’s, she/he, female/male) replacing with a non-gender-specific word.
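The redaction step suggested above could be sketched as a simple substitution pass before any report reaches the model. The word list and the “[X]” placeholder are purely illustrative – a production version would need a far more exhaustive list:

```python
# Sketch of a gender-redaction pre-processing step: replace gender-indicating
# words with neutral placeholders before the report reaches the model.
import re

NEUTRAL = {
    r"\bshe\b|\bhe\b": "they",
    r"\bher\b|\bhis\b": "their",
    r"\bwomen'?s\b|\bmen'?s\b": "[X]",   # hypothetical placeholder token
    r"\bfemale\b|\bmale\b": "person",
}

def redact_gender(text):
    for pattern, replacement in NEUTRAL.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

print(redact_gender("She captained the women's team for hockey"))
# -> "they captained the [X] team for hockey"
```

Note this only scrubs explicit markers – names, units and activities can still leak gender indirectly, which is exactly why the validation-on-edge-cases step described below in 1c would still be needed.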

The second concern that springs to mind is that if the algorithm is trained to favour those with glowing, strongly positive reports (a feature of sentiment analysis algorithms, which might well provide a large part of the score), it might bias against those whose report is written by a boss whose first language isn’t English, or whose boss has a tendency to be understated. Imagine the Gurkha regiment, for example. A native Nepalese Gurkha officer writes an understated report for a bright young female officer, which is even less glowing because he has a limited arsenal of complimentary words (e.g. exceptional, superb) – the young officer barely gets a look-in on the promotions pile despite high performance, simply because the algorithm biases against those whose bosses are non-native English speakers and less verbose.

As per the wording in 1c, a key process to guard against these biases would be to validate the algorithm on past data to see how it performs – looking critically at its performance on edge cases: how the algorithm scored the female Gurkha officer above, or a black female engineer from a tough upbringing compared to a similar-performing white male Etonian in the Rifles, amongst many other examples. Any discrepancies should be addressed at this point in the algorithm’s design, rather than once it’s been let loose on the career prospects of these people in real life.

1d. How will a feedback loop be created to correct for mistakes made by the algorithm, or the requirement to shift to a new decision paradigm?

Needless to say, regular human reviews of the algorithm’s performance, particularly on edge cases & appeals, will be required. This would allow dispassionate recalibration of the algorithm should it be found to be selecting against diversity (one reviewer [PhD in AI] noted on a draft: “Diversity in data is key! More data doesn’t mean better algorithms, more diverse data does.”), or in response to changes in people strategy (e.g. “we want a greater representation of people with technical skills to be promoted”). The strategy would also help identify those who were gaming the algorithm (1e: Could you game the algorithm, and if so, how can this be prevented?), so those loopholes could be closed – if that is feasible: one reviewer noted that those writing performance reports can already game a non-AI promotion system by writing overly positive reports for underperformers, so the algorithm cannot be held to impossible standards.


2a. Is there an easily-navigable appeals process for judgements made by the algorithm?

2b. In responses to appeals, is the algorithm’s decision process revealed, and can it be realistically understood by someone with GCSE-level maths?

Though a requirement for transparency seems obviously essential, it’s oft-neglected in the name of maintaining ‘proprietary’ algorithms – protecting profits at the cost of causing harm. Ensuring that the bright young female Gurkha officer can appeal after finding that her less able white male peer from the Rifles was promoted is crucial.

The appeals process will ideally need to be automated – providing a quick & transparent account of how the process ranked her, including the contribution of the algorithm, at a level that neither glosses over the details nor baffles the recipient (i.e. understandable to someone with GCSE maths). This sort of system HAS to be developed before release, because it will be challenged – and any poorly handled case will be jumped all over by the press, rightfully so. If the challenge is maintained after a transparent response, a human HR specialist will need to explore the case manually, at pace. Not only will this be critical to maintaining confidence in the system and revisiting cases where the algorithm’s contribution may have been wrong, it will also be critical to closing the feedback loop – if the algorithm made a mistake, it should be altered to avoid repeating it.


3a. Does the implementation plan incorporate a representative pilot study before roll-out, which compares the algorithm with humans and robustly demonstrates fairness & non-inferiority?

In terms of scale, I’m not going to delve into pilot design in this article (we’re already at 10 pages…!) but suffice to say the process should be tested prospectively, with sufficient statistical power to confidently assess whether it is (or isn’t) better than human grading – with a critical focus on edge cases to ensure avoidance of bias.

3b. Who will own the algorithm, and how will its use be limited for other means?

A ‘custodian’ of the algorithm would be required to ensure the algorithm’s use was only rarely (if ever) granted to other applications, to avoid ‘use creep’. Just as credit scores have been inappropriately used as a proxy for reliability in job application processes, the temptation would come to use the promotions algorithm in other decisions. Imagine if the RAF started paying employees like Google does – paying high performers significantly more – there might be some temptation to rank this using individuals’ promotions algorithm scores. The problem is, you’re then using the proxy of promotion suitability as a proxy for pay-rise eligibility – a proxy of a proxy. This would, of course, be inappropriate: imagine the world-class data engineer with no leadership qualities. Top data engineers are often paid well into six figures in the private sector on account of their incredible importance in enabling data-driven decisions – compromising her pay simply because an algorithm assessing suitability for promotion says so would be a poor decision, and would likely lead to her loss from the military.

I close this section by reasserting that I don’t claim to know whether using ML as part of the RAF promotion board is a good thing – simply that, based on my reading this book, the framework outlined above might be a good way of thinking about implementing it.


Hopefully you’ve enjoyed this longer-than-I-expected canter through the oft-neglected dark side of algorithms. We’ve covered the key features of WMDs: they cause harm at scale whilst avoiding transparency, though they’re often designed with the best intentions. Algorithms (particularly Machine Learning ones!) are sexy and seem almost reassuringly mathematical and scientific, and they will doubtless play an increasing role in automating the boring aspects of our lives and creating value in a wealth of different ways. My key takeaway is simply to think twice about the unintended consequences they can have, and to build in safeguards against them – hopefully the framework above provides a good starting point for doing so.

China, Silicon Valley and the New World Order

This blog is a book summary/review/some thoughts on the book “AI Superpowers” by Kai-Fu Lee.

Buy it on Amazon, or at any decent bookstore.

One of the most significant things I’ve read over the last few years is China’s AI strategy, which states its aspiration to “be the major artificial intelligence innovation center of the world” by 2030 (translation at [1]). If achieved, the results will be seismic. To get a true sense of why this ground-breaking bit of policy came about, what it means for China and the rest of the world, and what other nations can do to keep up or simply benefit from this tidal wave of AI investment, I read Kai-Fu’s book.

“major artificial intelligence innovation center of the world”

China’s AI strategy end goal, by 2030

I’ll summarise the book below (unless otherwise stated, most thoughts are Kai-Fu’s not mine), broadly based on his chapter order.


AI research has been around since the 1950s, making slow and steady progress – until now, thanks to the combination of two factors:

  • Deep Learning has supercharged AI’s ability to learn
  • the gigantic computational power now available at low cost (thanks to Moore’s Law)

These two factors mean we are suddenly at an inflection point where academic achievements can be translated into real-world use cases, at scale.

These real-world applications are already proving transformational to some industries, leading some (Andrew Ng, a pioneer of deep learning) to compare AI to electricity: a technology applicable to almost all industries, which will revolutionise many of them. Taking a piece of the AI pie will therefore be significant – PricewaterhouseCoopers estimates AI deployment will add $15.7Tn to global GDP by 2030. Later in the summary we’ll explore why Kai-Fu thinks this is very bad news for global inequality, but first we’ll explore why China is estimated to capture about half that sum.

China’s sputnik moment

China’s “sputnik moment” came when AI beat a human in the game Go in March 2016. 

Why did this spur a billion-person country into action? 

To give a sense of the scale of this achievement, consider Chess, a game we would consider intellectual. “Go” is similar in that it has two players, but it is far more complex than Chess, with around 300 times the number of possible plays. To give a sense of the popularity of the game, consider that 260 million people tuned in to watch the match between AlphaGo and Lee Sedol – so when Lee lost, it sent shockwaves through the population (in fact the broadcast was censored by the state part-way through…). For the UK reader, it would perhaps be like turning on the TV this weekend and watching robots built by a Chinese manufacturer play Liverpool in the FA Cup final and utterly take them apart.

A matter of months later, China released new policy on AI, with clear milestones culminating in being “the major artificial intelligence innovation center of the world” by 2030 (translation at [1]), accompanied by massive resource allocation at a national and regional level – subsidies for AI start-ups, generous government contracts to accelerate adoption, founding of incubators and special development funds, and significant government money poured into venture capital (VC) with very favourable (to the private sector) rates of return. 

This sent ripples through the private sector as, that year (2017), Chinese VC investment made up 48% of global AI venture funding, surpassing the US for the first time.

Why did this policy fall on such fertile ground?

This policy came at the right time, Lee observes, noting two fundamental shifts important to AI:

  • The shift from the age of discovery to the age of implementation, &
  • The shift from the age of expertise to the age of data.

What does he mean by these two assertions?

  1. The age of implementation

The value of a single genius in the “Age of Discovery” (when a field is predominantly in the R&D stage) is massive – e.g. Fermi was critical in translating nuclear physics into the atomic bomb in the Manhattan Project, which ended WW2 and established a new nuclear world order.

When thinking about AI, Geoffrey Hinton’s pioneering of Deep Learning might be seen as an (almost) modern-day Fermi-scale effort.

However, Lee argues that since powerful deep learning algorithms are now freely available as open source, and increases in algorithm performance are marginal rather than jaw-dropping, implementing these algorithms to solve real-world problems is the bigger challenge, and the value of a single genius is much less – we’re in the “Age of Implementation”.

Implementing AI requires entrepreneurs, and Lee makes the compelling case for China’s entrepreneurs being: (a) abundant and (b) the best-of-the-best at implementation.


Because they are “Gladiators” forged in the “Colosseum”, he states.

He describes Chinese entrepreneurs’ formative days as spent in unimaginably ferocious competition – making copycats of American products with ultra-aggressive pricing, on extreme timescales, and with barely any scruples – which makes for unbelievably effective entrepreneurs. Their tenacity is unique, and remarkably effective: working crazy hours (“making Silicon Valley look sluggish”, he describes), iterating fast enough to stay ahead of the (fierce) competition, and being willing to “go deep” on a product (that is, getting your hands dirty in logistics etc. to make your product more difficult to copy).

This talent is fundamentally better suited to the age of implementation than the West’s near-monopoly on geniuses – Google have got about half of the world’s top 100 AI researchers/engineers and have an inordinate R&D budget… but these ultra-nerds and massive research funding can hardly be described as “gladiatorial entrepreneurs”.

  2. The age of data

Aside from having lots of entrepreneurs willing to get their hands dirty and implement AI, China also has an abundance of data – with more internet users than the entirety of the US & Europe combined and a techno-utilitarian culture in which data-sharing is both more acceptable at a policy level (c.f. GDPR) and at an individual level – Chinese citizens are more willing to trade a degree of privacy for convenience. 

The quantity of internet data collected isn’t the only important factor – the quality is higher too, as Chinese citizens use a plethora of apps which translate offline actions (e.g. going to the doctor) into online ones, creating data which is simply unavailable in the West.

Take the Chinese app “WeChat” for example – widely referred to as the “digital Swiss army knife for modern life”. Before reading this book, I thought it was simply the Chinese version of WhatsApp… not so! You can pay for groceries with it, book doctor’s appointments, file taxes, unlock shared bikes… you name it. The data picture collected on you isn’t simply your online activity (e.g. your search history or likes) – it’s your offline activity (cycling, seeing the doctor etc.) too. This far richer data, capturing both online and offline life (unlike, say, Facebook, which profiles your life indirectly through what you “like”… hardly great data), allows AI algorithms to understand our lives far better, and opens up opportunities for other applications of AI.

Lee points out that, with easy access to open-source algorithms, an average data scientist with a big dataset can outperform the world’s best data scientist with an average dataset. The balance has shifted to the East.

Having such a significant quantity and quality of data creates a virtuous cycle. More data creates better algorithms, which make better products, which attracts more users, which gathers more data and so the loop self-perpetuates… 

AI start-ups in China vs US

To illustrate the above theses, here are a few examples contrasting US start-ups with their Chinese equivalents.

Have you heard of the ride-sharing app Uber? Of course you have. What about “Didi”, the Chinese Uber-like start-up?

Didi now offers more rides each day in China than Uber does across the globe… and it’s spreading rapidly into different continents.

Buzzfeed – the news platform? Of course, but what about “Toutiao”, its Chinese competitor, which is now worth ten times as much as Buzzfeed and has 120 million daily users who spend an average of 74 minutes a day on the site?

Toutiao is a great example of the rapid iteration occurring in Chinese AI start-ups, and their growing mastery of implementing AI – it’s a news site that no longer requires human editors! Its algorithms trawl the internet to identify pieces of news, tailor recommendations to each of its users, and filter out fake news along the way. It can even write its own news – during the 2016 Summer Olympics, Toutiao created an AI reporter that produced short summaries of sports events a matter of seconds after they finished, covering up to 30 events a day.

Our final example, Airbnb’s Chinese equivalent “Tujia”, illustrates that Chinese AI start-ups are willing to “go deep” to integrate their products into our everyday lives and create unique, hard-to-imitate products (a critical skill gained from the copycat era of the Chinese economy). Unlike Airbnb, which is basically a listing website, when you list your home on Tujia they offer to take on the hard work – they will install smart locks for you, restock supplies, and carry out the between-stays cleaning. The barriers to signing up are minuscule, turbocharging their growth and crushing the competition.

The resulting dystopia from AI?

Finally in the book, Lee veers away from the China-US compare and contrast, and touches on AGI (artificial general intelligence – when an AI has human level intellect across the board including empathy etc.) and economics.

Kai-Fu doesn’t personally see AGI as the biggest threat in the near term. His feeling is that it is a considerable distance from being technologically feasible, and he points to widespread technological over-optimism in predicting progress – even calling out his own prediction in the 1980s (as a world-leading expert on voice recognition at the time) that the software would go mainstream within 5 years… he was twenty years off!

Though he thinks AGI is a while off, he does think the major problem is that AI will wipe out billions of jobs – his prediction is that 40–50% of US jobs will be technically automatable within the next 10–15 years. He dispels the common argument that we’ll simply “adapt and find new roles”, just as when modern agriculture or the industrial revolution rolled in, stating that all the stats point to mass unemployment and a “useless class” (as Yuval Harari puts it) which essentially has little or nothing to add to the economy.

He also goes on to paint an even glummer picture, by highlighting that AI/tech favours “winner takes all economics” – around 70% of the gains in the global economy over the next decade (according to PwC consulting) will end up in the hands of a small number of companies in the US & China.

This rising inequality both between countries, and within countries, will have dramatic consequences – both financially, and psychologically, as a large proportion of the world population will lose their sense of purpose, with no meaningful job. Consider that rates of depression triple amongst those unemployed for 6 months.

The non-tech jobs left will initially be those that require dexterity – though the strawberry-picking robots coming online in California, and others like them, will soon put paid to these – and those that require compassion and human connection.

What’s the problem with this? Well, there simply won’t be enough jobs to go around, and they’re not very well paid either. For example, “home healthcare aide” is the fastest-growing profession in America… yet it’s one of the lowest paid, with an average salary of $22k/yr.

So what’s the solution?

Lee outlines the popular Silicon Valley argument that the super rich corporations will have to be taxed fairly aggressively and this money used for a UBI (universal basic income) to pay this “useless class”.

He points out, however, that he believes this is simply a lazy solution to a complex problem – coming back to the last paragraphs, even if people do have enough cash to live on, where will they derive their self-worth?

Admitting straight up that he was part of the UBI crowd until recently, he spends a whole chapter recounting confronting his own mortality when faced with a diagnosis of stage 4 lymphoma. This, he explains, led him to understand the importance of love, and being loving, in making life worthwhile.

He then outlines a vision which combines the phenomenal ability of AI to “think” with humans’ unique ability to wrap this analysis in “love” and compassion. He explains that:

  • Perhaps a stipend will be required, funded by taxes on rich corporations, but enhanced stipend payments would depend on people performing voluntary tasks that benefit the social good: care work, community service and education – e.g. being care assistants or teachers (he believes the number of teachers could go up ten-fold). In short, rewarding socially beneficial activities the way we currently reward economically productive ones.
  • He expects the landscape of jobs to change, with many jobs powered by AI behind a human veneer – e.g. the shopping assistant who, aided by a rich AI-generated profile of you, can masterfully up-sell you a special vintage of wine perfectly suited to your wife’s taste for her birthday; or the doctor who benefits from AI’s unique insight into your chances of treatment success and sensitively discusses the pros and cons of each approach with you – with little time pressure, because the AI is taking the notes and he didn’t need to read back through your history or analyse a load of test results beforehand.


It was a hell of a book! A rollercoaster, in fact.

I hope you enjoyed my short review/summary, and I encourage you to read the book itself – full of fantastic examples which have certainly shaped how I think about geopolitics and AI.

If you have constructive comments, please leave them below, and I’d be delighted to take the conversation further.



#4 Personal Development

Tips on innovation from a year at the jHub/jHubMed, James Kuht

[jHub/jHubMed are UK Strategic Command’s innovation hubs, situated in London; you can read more about this “secretive UK military lab” here. The views expressed below do not represent the MoD and are the author’s own]

“If you keep learning all the time, you have a wonderful advantage”

Charlie Munger, Berkshire Hathaway.

The law of compound interest (that a 1% improvement in your knowledge each day would make you doubly smart within 70 days) makes a compelling case for investing in your personal development.

Let’s take an example.

Jon and Michael are coders. They write software. One might measure their performance in terms of lines of code written.

Let’s say Jon is the more effective coder and writes 400 lines of code a day, while Michael writes 300 (I know a better coder might write the same software in fewer lines of code, but just play the game, okay 😉).

Jon’s a hard worker. He gets in at 8 every morning, works solidly until 5, then goes home. By the end of the year (220 working days, let’s say), he’s written 88,000 lines of code. A good employee, by common measure.

Michael is less experienced, but motivated to get better. His mentor advises him to spend two hours a day on personal development, upskilling in coding so that he can write code quicker.

Immediately this cuts his capacity by 22%, since he’s now working only 7 hours a day. That means he writes 233 lines of code per day, just over half Jon’s productivity.

Let’s presume, though, that each day he spends working on his coding he gets 0.5% more efficient at writing code.

How many lines of code would he write per day by the end of the year?


He ends the year almost twice as effective as Jon, despite working fewer hours and starting at little more than half his output. All this despite the fact that some might, short-sightedly, say he was putting off work to selfishly invest in his own skills.
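For the sceptical reader, the arithmetic is easy to check – a minimal sketch using the numbers from the example above (0.5% daily improvement compounding over 220 working days):

```python
WORKING_DAYS = 220

# Jon: a steady 400 lines/day, no time spent on personal development
jon_total = 400 * WORKING_DAYS  # 88,000 lines over the year

# Michael: starts at 233 lines/day (7-hour days), improves 0.5% each working day
michael_daily = 233.0
michael_total = 0.0
for _ in range(WORKING_DAYS):
    michael_total += michael_daily
    michael_daily *= 1.005  # compound interest on his skills

print(f"Michael's final rate: {michael_daily:.0f} lines/day vs Jon's 400")
print(f"Year totals - Michael: {michael_total:,.0f}, Jon: {jon_total:,}")
```

Michael finishes the year writing roughly 700 lines a day – and, perhaps surprisingly, his annual total (around 93,000 lines) also edges past Jon’s 88,000 despite the slow start. The same compounding lies behind the “1% a day doubles you in 70 days” claim earlier: 1.01^70 ≈ 2.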

I hope this offers a compelling vision for why you should invest in your own personal development. Now let’s delve into the how and the what.

Prioritising your personal development

There are two points to make here:

(1) how do you prioritise time for your personal development in a busy schedule, and

(2) how do you prioritise what content to fill your personal development time with.

In relation to prioritising time, I refer back to the first entry in this series, on Ideation – “you become your diary” – block the time out in your diary, at a time when there are no distractions and your thinking is at its clearest.

When it comes to prioritising what your personal development goals should be, I have a couple of tips:

  • “Thinking small is a self-fulfilling prophecy” – pick a relevant and audacious goal and plot the breadcrumb trail to get to it (a similar tactic to the “GSD” blog). This might be: “I’m going to become the most authoritative person in my workplace on AI”. The breadcrumb trail to achieve that might involve:
    • taking an online course in AI, to gain some practitioner experience
    • reading an authoritative book on AI in your domain (for me, “Deep Medicine” by Eric Topol), & …
    • capturing your thoughts in concise blog form, to share with others and expose yourself to their critique.
  • Pick a mentor who knows you well to guide you – or ideally two very different ones(!). If you’re not sure where you want to aim with your personal development, ask a colleague or your boss for frank feedback on what your weakness is, then refer back to point one to plot the breadcrumb trail to addressing it.

Don’t just read books or articles, digest them

 “Why do you have to fail, to learn?”

Paraphrased from Steve Hansen, All Blacks Coach. [Importantly, I should point out that this is not a contradiction of the “fail fast” mentality of innovation. It simply states that personal failure is not the only route to learning – you will no doubt fail at some point, but why not get some things right first time (having learnt from others’ experiences documented in articles/books) and save your failures for the truly uncharted territory?]

As part of your development plan, you are likely to have chosen some books/articles to read tailored to your development goals. These resources can offer a unique insight into immensely successful people’s wisdom that would only otherwise be accessed through years of experience, expensive/significant failures, or spending hundreds of hours of time with them.

What surprises me most about people and books though, is the lack of “insight-conversion” that tends to occur. People read books, recommend them to others, and go on operating/living their life as before – very few of the highly valuable insights these books offer get converted into action despite hundreds of pages of evidence.


My view is that an extra 10% of effort can dramatically increase your “insight-conversion” rate, and accelerate your personal development compared to your peers.

This extra 10% comes in the form of reflecting upon the book after reading it, and writing a concise summary (no greater than 2-pages) of:

  • The key points
  • The key pieces of evidence
  • The key phrases or mantras

This forces you to distil the reading down to a framework which your brain can meaningfully comprehend and work with on a day-to-day basis – a framework which your brain can manipulate, combine with other frameworks already existing in your mind, and apply to problems you encounter.

Invest disproportionately in your operating system

As you digest these resources, you should consider your balance of investment in specific vs generalist skills. My old boss, H, would often spin a metaphor here and call the latter your “operating system” skills.

He would argue that humans are a bit like smartphones.

There are two things you can do with a smartphone to increase its performance.

  1. Upgrade general capabilities such as the operating system, storage, or processor, which improve its performance across every domain of its function, or…
  2. download specific apps which perform a specialised function – often these apps get neglected and uninstalled some time later.

H’s point was that this roughly translates to people as follows:

  1. Investing in your own operating system – your communication skills, mental models (more on this below), resilience, focus. Skills that underlie every aspect of your career/life.
  2. Investing time in gaining a very specific skill – e.g. a Masters in X – a piece of specialist knowledge which you may rarely use.

His point was that people seem to spend disproportionate amounts of time working towards getting Masters/letters after their names – installing apps – which are rarely used, when they might be better served spending time investing in their operating system – which would have a disproportionate effect of increasing their performance in almost every domain.

Mental models

“developing the habit of mastering the multiple models which underlie reality is the best thing you can do.”

Charlie Munger

One of the operating-system skills mentioned above was “Mental Models”. Charlie Munger is, in the view of many, the master of these mental models – frameworks for thinking that you can apply consistently to a wide variety of problems in order to solve them. He’s widely credited as a driving force behind Berkshire Hathaway’s success (a roughly 2,000,000% return on investment).

What do I mean by a mental model? Let me give you a simple example. One of Charlie’s favourites is a model called “inversion”, which can be applied to almost any problem. It simply involves tackling the problem by thinking through how you’d reach the opposite of the solution, then avoiding those tactics.

Let’s say you want to be a better leader, but are young and perhaps it’s your first time leading a team.

Thinking conventionally, we might make a list of all the things which might make a good leader. Given your inexperience many of these qualities might be simply unreachable – “battle-hardened”, “widely respected”, “proven”, “experienced”.

Using inversion, we would instead think about the things that would make a poor leader, then avoid them. For example “dishonest”, “inconsistent”, “unclear”, “immoral”. Avoiding these behaviours is eminently achievable regardless of your experience, and would likely make you a pretty reasonable leader.

He gives a great example in this article where he applies some of his mental models to increasing the value of a company from $2Million to $2Trillion (theoretically).

Hopefully the books you read/podcasts you listen to allow you to pick up some useful mental models/frameworks, with which you can upgrade your operating system to increase your performance.


I hope this blog leaves you considering blocking a couple of hours a day in your diary and benefitting from the compound interest that results from investing in yourself. Audacious development goals, perhaps chosen in consultation with a mentor, should help you take aim and focus your valuable time. Along the way, don’t forget to truly digest the best of the articles and books you read, and invest disproportionately in your operating system. Hopefully this 4-part blog series has been a humble start to this.

Good luck!

Feedback welcomed, as always.

James Kuht

Thanks to Steve Spencer Chapman & Amrit S for reviewing the drafts of this blog and offering helpful feedback

twitter: @KuhtJames || blog site: || email: 

If you’re interested in learning more about innovation in Medicine, we’ve interviewed a number of fascinating guests on the Military Medicine Podcast including; the Director of the ‘Nudge Unit’ Hugo Harper, the British & Irish Lions Doctor James Robson & many more, found on iTunes, Spotify & soundcloud.

jHub/jHubMed Scout, Aug 18 – Feb 20

#3 Leadership

Tips on innovation from a year at the jHub/jHubMed, James Kuht

[jHub/jHubMed are UK Strategic Command’s innovation hubs, situated in London; you can read more about this “secretive UK military lab” here. The views expressed below do not represent the MoD and are the author’s own]

In the previous blog on GSD (“Getting Stuff Done”) we talked about the criticality of assembling a small-but-perfectly-formed team – this blog explores how you might lead them effectively.

Now I’m certainly not a born leader – naturally quite introverted, in fact – and I’m not a veteran military leader either; this blog humbly offers my observations of good leadership practice specific to innovation.

Leading innovation projects brings unique challenges. The projects are often disruptive and risky. The people you lead are specialists, often senior to you and usually volunteering their time. The pace can be break-neck.

This blog starts by outlining how to convey your mission with clarity, then covers prioritising your team’s workload, empowering them to own their tasks as they set off to accomplish your mission, staying humble, and maintaining a culture of healthy challenge.


“If you can’t explain it simply, you don’t understand it”

Albert Einstein

“Life rewards the specific ask and punishes the vague wish”

Tim Ferriss

Innovation is often vague and difficult to decipher – people argue for hours over its definition, innovation jargon is almost impenetrable, and it commonly involves technologies that are fledgling or ill-understood.

Given this, what struck me when I first met the Head of the jHub was the clarity of his mission for us jHub staff. He had composed a 9-word mission statement: “Capability into the hands of the user at pace” and a 7-item “operating principles” document, outlining how we were to work including unashamedly direct items such as: “Deliver results”, “Do the basics”, and “Obligation to Dissent”.

It was impossible for us not to buy into this mission. There was an almost tribal belonging to a highly motivated group of exiles doing things differently. The sheer simplicity of the operating principles initially baffled me – so glaringly obvious, yet so effective – and I had to be frequently reminded: “just because it’s common sense, doesn’t mean it’s common practice”.

Leadership lesson learned – clarity – clear vision, clear mission, clear principles.

…but this clarity did not come easily for me. I’m a waffler. So what can you do if, like me, you find speaking succinctly and with clarity challenging?

Firstly, it takes effort to distil down complex ideas or new technologies to bite-sized chunks that can be easily understood by many.

Start by taking inspiration from TED talks, where metaphors are often used to explain complex subjects like quantum computing and string theory in talks lasting less than 15 minutes. No offence, but your idea is likely to be simpler than either of these topics, so consider yourself lucky!

Once you’ve seen how the best do it, it’s probably time to get some practice. The next time you give a presentation, try a format such as PechaKucha – 20 slides, each containing a single image, which automatically progress every 20 seconds. This will force you to truly distil your ideas into a succinct and easily understandable narrative, and also force you to memorise the talk – a great skill for improving your connection with an audience when conveying a mission or vision with clarity.

Another tip I have found useful when either debating a point or conveying an idea with clarity is to stick to the rule of 3’s – structuring your argument or vision in three points. It’s a common strategy employed by some of the world’s best debaters.

So, how do you get good at this?

Structure your notes like this, your scripts for speaking, and your elevator pitches. It gradually becomes habitual… my triplet-prosody is now probably fairly annoying for my girlfriend.

For one of my projects, a physiotherapy mobile app, this became:

“using a mobile app to prescribe physiotherapy will (1) empower patients with more ownership of their own healthcare, (2) arm clinicians with data to make better decisions, and through these two effects (3) reduce the overall time spent rehabilitating from a Musculoskeletal injury.”


Try it out. Cut the waffle and unleash the hat-trick of killer points. People will find you so much easier to follow if they understand your vision.


“Tackle the hardest problem first”

Approach frequently attributed to Google

Having clearly articulated the mission to your team, it’s important that you prioritise their tasks optimally.

Unfortunately, this doesn’t always happen. Team leads (including me!) found it difficult to prioritise the boring-but-challenging tasks, so progress ground to a halt just when momentum had built up. A typical example: the necessity to run a commercial competition in line with EU law. Anyone who has run a commercial competition will know that it requires deep thought to get a “Statement of Requirement” right, a lot of paperwork, and a strict focus on avoiding bias and getting the best possible product at the best value for the taxpayer.

The most effective project leads seem to make a concerted decision to avoid this issue by conquering the commercial competition (or equivalent boring-but-challenging task) first, prioritising any work related to it. The least effective leads spent time on the interesting and fun things – meeting senior stakeholders, building a prototype, or trying out kit – but simply made no progress on delivering the project.

I have had experiences at both ends of this spectrum. I got it right for one project I led – a Mixed-Reality CBRN (Chemical, Biological, Radiological and Nuclear weapons) Clinical Simulation Trainer. We obsessed over creating an accurate requirement, prioritised the competition over all other work (including one 15-hour day evaluating submissions!) and strictly avoided a biased process. Now, 5 months after awarding the contract, we have a great product at a fantastic price, and have proceeded seamlessly to the next phase of the project after gaining funding. In contrast, another project I led has been in commercial competition for 5 months after gaining funding, and I’m still involved despite having moved on from the jHubMed. The beers after a successful funding pitch taste so much less sweet when you know the hard work is about to start. Trust me.

What is the boring-but-critical-task that holds back the progress of your project/team? Is it at the top of your priority list?

Don’t ask people to do it, allow them to own it.

“Be strict about your goals, but flexible about your methods”

Author unknown

Once you’ve prioritised the tasks to achieve, it’s important to delegate them effectively. Empowering those working for you sounds like the obvious tactic in achieving this but personally I’ve found it embarrassingly difficult to do. My key takeaway from this year has been that:

Empowerment isn’t giving someone something to do, it’s giving someone something to own.

One of the team for the Physiotherapy app project that I led, R, was an academic and used to be a bit of a waffler – like me! R had a lot to offer, but pitching was not his strong suit. When the opportunity came for a 3-minute pitch to a General whilst I would be on holiday, I became rather nervous about whether R would even get past his introduction in the 3 minutes. Nevertheless, there was nothing for it – I was going on holiday, so I had to leave it in his hands.

He must have absolutely smashed it, because the General immediately followed up with considerable interest in the project.

Since then R has done numerous pitches and the turnaround in his pitching style is remarkable – he has owned each and every occasion. I wish I could take some credit for empowering him with true ownership of the task, which catalysed his improvement in public speaking, but actually he can take far more credit for how he changed my approach to empowering those on the team.

Empowerment isn’t giving someone something to do, it’s giving someone something to own.


Humility

There’s little point in me beating the drum on humility in general, for fear of sounding patronising, so instead below are three (of course, three) tangible examples of humility in action that you might consider.

  1. Humility to ask stupid questions:

Have you ever sat in a talk/meeting where one of your team members makes a technical point which is utterly incomprehensible to you? Next usually comes the uniform response of nods from around the table, and a swift move on to something everyone understands. “Bullshit baffles brains”.

Now this is dangerous – the devil is in the detail in innovation projects and ignoring a problem tends only to make it grow, ready to leap out and floor your project later when you’ve invested more time and money into it. The most inspiring leaders I’ve witnessed have the courage to ask the question “So if I understand this clearly you mean [x]?” – posing it in a way which exposes that they might be being stupid (in which case – great, the leader was missing something and we can all move on!). More often though, this exposes a key risk to the project that was about to be glossed over, which can then be addressed.

2. Humility to know you might be wrong:

When you propose a solution, do you get sick of people picking your idea apart and telling you why it won’t work? Me too! In fact, I find it infuriating! Ashamedly, I’ve occasionally even found myself asking how on earth the person could be stupid enough to think that! There’s clearly no room for that sort of mentality in leading a team successfully – your team, critics and stakeholders have different and valuable perspectives to learn from. Here are two action points you might consider:

  1. When someone criticises your project, make sure to note their criticism down and really challenge yourself as to whether their point is valid or not, and what your response to it is. Chances are others will repeat their point in the future, and you should be grateful for the fact they exposed this weak-link in your project before a General did. What a gift.
  2. Have the humility to acknowledge that they may be right. A staunch critic of your project can quickly become a supporter if they realise that you have set up a rigorous evaluation of your project, with metrics that will inarguably prove one of you right. If you are right, then great – they are far more likely to become a new supporter.

3. Humility to refer to the team, and not yourself:

Something I got picked up on by a colleague was referring to myself too much. “I did [x]” rather than “the team did [x]”. It might be technically true, but it sounds self-centred. Does it really matter if someone knows that you filled in most of that spreadsheet rather than your colleague? Let others sell your success.

Obligation to dissent/Class 2 arguments

Finally, we’ll touch on an important culture to foster within your team – that of an “obligation to dissent”: a sense of duty among those who form your small team to speak up if they see a flaw in the plan. Innovation projects are high risk, with danger around every corner – but most of these dangers can be anticipated and mitigated, if only they are brought up.

Let me share a personal example of my failings on this! When I was a fledgling jHub scout I video-teleconferenced into an OA (preliminary funding) panel. The pitch was presented well, but frankly I could not see any possibility of it delivering until 2023 – the IT infrastructure simply did not exist.

jHub’s remit is to innovate in weeks and months, not (5) years.

I didn’t challenge it in the OA, and it was passed, for a not insignificant cost.

Shortly afterwards, I called a trusted colleague to ask him what he thought about the OA – he shared my views – how on earth was this going to deliver?

We wrote an email there and then, raising these concerns to the boss. It was concise and the arguments well-formed.

Minutes later I received a stern response asking why on earth I hadn’t brought this up in the pitch – that we have a clearly stated “obligation to dissent” and I had directly breached this.

I felt terrible – but it was one of the strongest life lessons I have ever learned! I am far less reserved in my feedback since…

So what happened next? The project has been slow to progress since, and still has not been delivered (~18 months later) – arguably, our lack of dissent has contributed to the significant opportunity cost of a jHub scout’s time, and to potentially unnecessary spending of a significant sum of money.

A caveat I must note, though, is that there is a balance to be struck – not all dissent/feedback is useful. Picking apart every proposal with aggressive “dissent” can halt progress and damage interpersonal relationships.

So how do you ensure your feedback/dissent is most productive?

Perhaps a good way to approach this is to follow the lessons learned from PARC/ARPA documented in Dominic Cummings’ blogs: engage only in “Class 2 arguments” – ones in which you can explain the other person’s argument to their satisfaction. This leads to a well-informed and mutually respectful exchange of views which, in my observation, can be extremely productive. This environment is, indeed, partly credited with creating the conditions that enabled the development of the internet… Mutual understanding and mutual respect.


So there we have it. Form a succinct mission and convey it with clarity, then prioritise your team’s tasks and empower them to own them, rather than just complete them. Always act with humility, and try to foster a culture of constructive challenge/obligation to dissent.

I certainly can’t profess to have mastered these, but I hope that by offering some frameworks by which you might think about them, you might avoid making the same mistakes as me.

One final point: make sure to measure your performance at the end of the year by doing a 180-feedback survey on your leadership, sent to all your project teams… “What you measure you become”.

James Kuht

Thanks to Steve Spencer Chapman & Amrit S for reviewing the drafts of this blog and offering helpful feedback

twitter: @KuhtJames || blog site: || email: 

If you’re interested in learning more about innovation in Medicine, we’ve interviewed a number of fascinating guests on the Military Medicine Podcast, including the Director of the ‘Nudge Unit’ Hugo Harper, the British & Irish Lions Doctor James Robson and many more – found on iTunes, Spotify & SoundCloud.

jHub/jHubMed Scout, Aug 18 – Feb 20

#2 Getting Stuff Done

Tips on innovation from a year at the jHub/jHubMed, James Kuht

[jHub/jHubMed are UK Strategic Command’s innovation hubs, situated in London – you can read more about this “secretive UK military lab” here. The views expressed below do not represent the MoD and are the author’s own]

“Ideas are easy, execution is everything”

John Doerr, Billionaire Tech Investor & author of “Measure what matters”

 “Getting stuff done”  (GSD) sounds so simple but is a skill held by remarkably few.

I’m not talking about simply being given a task to do and completing it; I’m talking about conceiving a project (perhaps taking the ideas formed from blog #1) and then taking it all the way to completion. There are always emails in the way, meetings to go to, people to chat to… and someone/something else to blame.

Below are some tips for GSD. We’ll rattle through how you can plot your route to success, inching your way along this route with the simple to-do list, forming a dream team to help you get there and avoiding distractions. Let’s dive in.

Visualise the end-game, and plot the breadcrumb trail to get there

In chess – the most successful players are the ones who can think the most moves in advance and plot their route to achieving checkmate. Innovation is remarkably similar.

So, presuming you have a project, or a project in mind, humour me with these prompts below (perhaps even scribble down your answers) which should illustrate whether you truly have visualised what “checkmate” looks like for your project, and the moves you need to make to get there.

What would successful delivery of your project ideally look like? When will that be?

Who will be there to deliver it? Are they a part of the project team and do they love it?

What are the major milestones along the way? Who will guide it over each of these milestones?

What are the most time-critical jobs to do over the next month to achieve these milestones?

What are the most time-critical steps to take over the next week?



This sort of future-back thinking seems so obvious, but common sense isn’t necessarily common practice.

In my experience if you don’t do this you’ll simply fill your day with easy things – like answering emails or attending meetings – brainless stuff that doesn’t progress your project. Then, all of a sudden, months have passed and you’ve gotten nowhere. That clear inbox is a hollow prize for your efforts.

Pick 5 things to achieve each day

Holding yourself to account on taking these steps is as simple as keeping a to-do list.

Try starting every day by scribbling a 5-point to-do list – 5 things that will each take 30–120 minutes and get you closer to project success – and don’t stop until you’ve finished them. Sometimes it’ll be 3pm (woohoo – gym/run time), sometimes 10pm. Personally, I find this acts as a gratifying yardstick of progress through the day, which allows you to hold yourself to account.

One of the first jobs that springs onto most people’s to-do list is building a team and gaining some supportive stakeholders to help them progress their project. Let’s explore this below.

Minimum-but-sufficient stakeholder engagement

“Want to go fast? Go alone. Want to go far? Go together. Want to go fast and far? Go in a tribe.”

Modified African Proverb [Mr Barney Green, Vascular Surgeon, added the last part]

This may be counter-intuitive to many, but in my experience the speed of execution of a project is often slowed down by engaging too many stakeholders too soon.

Without wanting to sound like an aspiring dictator, truly the quickest way to make decisions is in a democracy of one (or very few, at least), and at the start of a project, when there are many decisions to be made of little consequence, this is the best option. “Go fast, go alone”.

The temptation of airing your great idea to important people early is significant (some nice back-patting), but often stifling to your progress: each stakeholder will want to shape your proposal (if it truly is a good one), and you’ll soon be crippled by engaging a complex stakeholder map to make simple decisions. You’ll also be accountable to more figures (in both number and seniority) if it all goes wrong – the weight of expectation isn’t a good thing for a young innovation project.

Here is a suggested framework for stakeholder engagement:

  1. Ideation stage: if you truly are innovative, try to push yourself to dive deep in the ideation stage on your own (see “Blog #1: Ideation” for reference), with input from others only when you have a developed idea you’re ready to share. The better state the idea is in when it comes up for feedback, the better the quality of feedback you’ll receive, and the better co-workers/secondees/stakeholders you’ll attract (also, if your idea is ill-defined you risk undermining your own credibility).
  2. Early project stage: Once you’re moving towards getting funding, you want to bring on a small-but-perfectly-formed A-team who will take this project through the pilot stage too. “Go fast and far, go in a tribe”. Ideally it will consist of:
    1. a doer (someone who is genuinely going to run the pilot and help develop the project);
    2. someone who is suitably important to give the project credibility (i.e. ~OF4);
    3. ideally (though not always possible) someone who may be involved in delivering the project in the long term – so that they feel invested from the start, and the handover after your successful board pitch is seamless (don’t underestimate this).

…this is your tribe.

  3. Post-pilot stage: Once you’ve run a pilot and gained some results, you are in a position to be reasonably confident of whether your project is a success or failure. This is the time to get the support/recognition your ego has been yearning for – suddenly quotes from 5 Generals who all think your pilot was great will be useful for the board pitch, and you want to really focus on bringing the whole future delivery team into the love-in (beforehand it would have been a waste of their time). “Go [really] far, go in a [big] team”.

Getting the best out of your team and your stakeholders

“Life favours the specific ask and punishes the vague wish”

Tim Ferriss, best-selling author of “The 4-Hour Workweek”

“A problem well-stated is half-solved.”

Charles Kettering, Head of Research for GM 1920-1945.

Once you have assembled your team and stakeholders, there will be a variety of tasks you need them to complete, approvals they need to sign off, and quotes they need to provide in order to GSD.

To do this, one of the key skills you need is to make it easy for them to say “yes” to your requests, by clearly articulating your asks. Here’s an example below.

When I first joined the jHub, I was in the process of launching a podcast (The Military Medicine Podcast). We needed a star guest for the first episode to launch with a bang – only problem was, I was a nobody, so how could we get one?

My boss asked, “why don’t you get General Sir Chris Deverell on it?” (Commander JFC at the time).

“How?!” I thought. It genuinely felt like there was more chance of getting Elvis Presley on.

Bemused, he simply told me to email the General a concise and specific ask. It felt like potential career suicide.

 “It all boils down to whether they can answer with the single word ‘yes’” he explained.

The email was crafted accordingly: a catchy subject line, punchy first sentence, brief body, and the simple ending:

“Please simply reply ‘Yes’ if you are happy to appear on the podcast and we’ll take it from there with your outer office.”

A few hours later, ‘Yes’ appeared in my inbox.

A month later we interviewed General Chris, hit Number 6 in the iTunes Chart for Science/Medicine, and it is still our most-listened-to episode.

There is slightly more to behavioural science than simply writing good emails, but honestly, it mostly boils down to making it easy for people to say ‘yes’ (or take an equivalent action). If you want to learn more about it (you should), check out “Inside the Nudge Unit” – it offers a 4-step guide to making behaviour change easy, which, in brief, is to make an offer/action:

  1. Easy (i.e. frictionless)
  2. Attractive
  3. Social
  4. Timely

We based the entire design of the jHub Coding Scheme on these principles, and now that it’s the biggest coding/AI upskilling initiative within Whitehall – despite not a penny spent on advertising – it’s hard not to agree that this is the secret sauce for making people say ‘yes’ (in this case, to spending 20–100 hours of their free time learning to code).

Saying no gives focus – conferences/meetings/presentations/too many projects

[It seems counterintuitive to progress from a section on making it easy for people to say “Yes” to telling you why you should say “No” to requests, but we’re going to do it anyway 😉 ]

So, you’ve plotted the path to rip-roaring success, nailed your daily to-do list, and have assembled a great team to help you on your way. It’s a great start, but it’s still going to be a bumpy ride, full of all sorts of blind-sidings and time-consuming/boring tasks (i.e. commercial competitions). This stuff is dull, but the hard yards take you the distance – it is very easy to be tempted to “look busy” by filling time with anything but these tasks.

One thing that is painful to do, but frees up valuable time, is going light on meetings/conferences (especially the sexy-looking ones abroad!). You will have to find your own balance on this one, but mine was made very clear from the outset by my old boss, H, who, when I asked for permission to attend a conference, would simply reply “does it directly increase the chance of you successfully delivering your projects?”. If the answer was in any way shaky, it would be a blanket “no”.

Though this sounds limiting, I genuinely believe it to be one of the key ingredients in the secret sauce for “GSD” – sitting on receive at a conference for a day is easy; taking your project to the next level isn’t. That’s why so few people have delivered hard projects (although many would give you a whole bucketload of excuses…!).


There really is no sexy way to say it: GSD simply relies on having a clear aim and a breadcrumb trail to reaching it, creating a dream team to help you on your way, making it easy for people to say “yes” to your requests, and remaining focused on achieving your aim.

Good luck.

James Kuht


#1 Ideation

Tips on innovation from a year at the jHub/jHubMed, James Kuht

[jHub/jHubMed are UK Strategic Command’s innovation hubs, situated in London – you can read more about this “secretive UK military lab” here. The views expressed below do not represent the MoD and are the author’s own]

Introduction to the series

This is the first article in a series of four on my experiences of innovation at the jHub/jHubMed – UK Strategic Command’s innovation hubs. They’re intended for anyone who is interested or actively involved in innovation or entrepreneurship.

I’ve attempted to write them in a logical flow, commencing with (1) ideation – hatching a good idea worth pursuing; then (2) how to get stuff done and build on these ideas; (3) leadership, anticipating that you might be starting to build a team around your project as it grows in scale; and (4) personal development – which should be a continuous and valuable by-product of your innovation journey for you and your team, but is sometimes neglected.

The structure of each blog is intended to give three to five actionable points for each topic. I do not claim to be an expert in innovation, nor that these points are exhaustive, but simply that I believe these principles were fundamental to the success of the teams I worked in – delivering three disruptive innovation projects in a large bureaucracy (the military) in 18 months.

Let’s dive in with the first blog, on ideation.

 “The true sign of intelligence is not knowledge but imagination”

Albert Einstein

Just as authors get “writer’s block”, “innovator’s block” – the sheer inability to come up with any decent innovative ideas – is a legitimate condition too.

I had my most severe bout of “Innovators block” when my previous jHub boss gave me a call one Sunday afternoon and said, “James, I want you to focus only on ‘10X’ ideas – your remit is to only go after the biggest and boldest ideas”. [a “10X” idea is one which would fundamentally disrupt an industry, a product 10X better than what is currently available, and might require a ground-breaking approach to a problem. Think Uber, or Amazon, or self-driving cars]

Sure enough, when you start to try to think up “10X ideas”, it’s hard to know where to start! The remit was so vast it felt insurmountable. I had innovators block.

After I had put it off for weeks and weeks, my boss suddenly asked, “what’s going on with your 10X ideas? I’d like to see a summary of them next week.”… The answer was ‘nothing’… cue the mad-panic all-nighter to try and formulate some.

Below are a few tips I wish future me could have told lazy me at the time – they would have helped me ideate more effectively and saved me the stress. Get ready to re-shape your calendar, grasp some formulas that spit out disruptive ideas, and execute on them.

Protected time is key

“You are your diary”

Stephen Hart, MD of the NHS Leadership Academy [I interviewed Stephen for the Military Medicine Podcast here]

The first point to touch on is the criticality of protected time for coming up with good ideas – without protecting diary time for ideating, you will simply fill the time with meetings and answering emails.

I aimed for 2 ‘premier’ hours per day – ‘premier’ meaning a time when you’re at your most creative; for me, first thing in the morning.

Without this protected time, none of the tools below are going to work. Do it: block 2 hours a day in your diary for a week or two, and switch your emails off during that time.

Write a future-back narrative

So you’ve protected some time, now where to start?

This tip is unashamedly borrowed from a great book called Leading Transformation. The authors used this strategy to turn a stagnant US home-improvement retailer (Lowe’s) into one of the most innovative companies on the planet.

They suggest writing (in science fiction style) a short “future-back narrative”. A sci-fi style vision of what the organisation (or a specific area/problem of it) you’re trying to innovate in might look like in 20 years’ time.

Don’t hold back!

To give you an example, I wrote a short future-back narrative about combat first aid – envisioning a scene in 20 years’ time where medics use AI decision aids to guide their life-saving actions on the battlefield, projected in real time into their vision using Mixed-Reality (MR) glasses.

Then, once you’ve envisioned something wild, it’s time to drill down into two things:

  1. Which components of your vision are realistically achievable now – perhaps having been achieved in another industry?
  2. For those components that aren’t achievable, what is the step that could be taken now to take us a step closer to that future state?

For the example above, to get to AI decision aids (arguably unreachable for the UK Defence Medical Services [DMS] currently), we needed mass upskilling in AI in the DMS to start harnessing the data we collect to train appropriate algorithms. So, I proposed building a coding/AI upskilling scheme. It’s now the biggest across Whitehall. Secondly, to get to a state where medics in the field are wearing MR headsets that allow them to bring up these decision aids when they need them, or perhaps even guide their actions, the first place to start would be developing MR simulation trainers. The use case we felt would have the most impact initially was CBRN, so we developed a CBRN Mixed-Reality simulation trainer which will shortly be the first MR simulation trainer delivered in Defence.

See if you can come up with 10 different ideas for your domain of innovation from this way of thinking – what would a future-back narrative look like for engineering in 20 years’ time (predictive maintenance/AR to assist engineers?), or administration/HR (all accessed through a mobile, fully automated?), etc.? What are the steps you can take now to bring us closer to this future vision?

Ok, once you’ve had some fun with that try out the next tool.

Combinatorial thinking

“Why do so many world-changing insights come from people with little or no related experience? Charles Darwin was a geologist when he proposed the theory of evolution. And it was an astronomer who finally explained what happened to the dinosaurs.”

Frans Johansson, The Medici Effect

What are the chances that you or I (a Doctor) would come up with a great new military innovation in the engineering space?

Engineers are smart, possibly smarter than both of us, and have more domain-specific knowledge – they know the problems and conventional solutions to them.

Many would argue that the only way we would plausibly outsmart them is by transplanting a solution from another field (perhaps our own) into theirs.

It’s at this intersection, where excellence from one domain is shifted to solve a problem in an entirely different one, where some of the most exciting innovations/discoveries happen. As an innovator, this is where you can add the most value – you might be bilingual in tech and medicine (or whatever your subject domain), so have an unfair advantage to come up with game-changing innovations over your purely medical colleagues.

A (very) simple example is taken from Atul Gawande’s book, The Checklist Manifesto. It documents the tragic case of a young lady undergoing routine surgery who died following a failed intubation, in part due to the tunnel vision of the highly experienced specialists involved and a failure to follow procedure in a high-pressure situation. The late lady’s husband was a pilot and saw parallels between intubating a difficult airway under high pressure and aspects of flying an aircraft. He noted that checklists had been introduced to flying long ago and were remarkably effective at reducing critical error rates. Soon after, something similar was introduced for managing difficult airways as a direct result of his insight, with a resulting significant decrease in critical airway events.

Aside from this sad example, coming up with some of these ideas can prove to be a fun game.

  1. Make 5-10 post-it notes of problems you’d like to solve (e.g. gate guard for military bases, exiting a station, musculoskeletal injury downgrades, problem drinking in the military, lack of availability of doctor’s appointments).
  2. Then make 5-10 post-it notes of technologies and how they are successfully disrupting other industries (e.g. computer vision enabling facial recognition on a phone, robotic process automation transforming customer services, behavioural economics interventions to increase tax returns, etc.).
  3. Then randomly match them up!
  4. It may take some work, but see whether you can solve the problem using the technology! For example, if you matched up gate guard with computer vision, it’s not hard to envisage replacing the soldier on gate-guard with a camera that recognises your face and allows or denies your entry to the base depending on whether it recognises you. This is fairly plausible – a KFC in China recently enabled “pay by face”: computer vision algorithms recognised who customers were when they made an order, checked they were alive (i.e. to avoid people holding up pictures of others!), then took the payment from their “Alipay” account (an Alibaba-provided service) automatically [article found here].
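As a playful illustration, the random-matching step of the post-it game above can be sketched in a few lines of Python. The problem and technology lists are just the examples already given in this blog, and `random_matchups` is simply a name I’ve chosen for the shuffle-and-pair step – this is a sketch, not a prescribed tool:

```python
import random

def random_matchups(problems, technologies, seed=None):
    """Randomly pair each problem post-it with a technology post-it."""
    rng = random.Random(seed)     # pass a seed to make a run repeatable
    shuffled = technologies[:]    # copy, so the caller's list isn't mutated
    rng.shuffle(shuffled)         # the "randomly match them up!" step
    return list(zip(problems, shuffled))

# Example post-its, taken from the lists above
problems = [
    "gate guard for military bases",
    "exiting a station",
    "musculoskeletal injury downgrades",
]
technologies = [
    "computer vision (facial recognition)",
    "robotic process automation (customer services)",
    "behavioural economics interventions (tax returns)",
]

for problem, tech in random_matchups(problems, technologies):
    print(f"What if we tackled '{problem}' using {tech}?")
```

Run it a few times and you get a fresh set of forced collisions between problems and technologies – most will be nonsense, but occasionally one will be worth drilling into.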

Emulating the best

My final tip is to think about some of the biggest problems/spend areas in the area you’re trying to innovate in, and imagine what they might look like if they were suddenly contracted out to the biggest and most technologically advanced companies on the planet that work in a connected industry. This is a sure-fire way to come up with innovative ideas that are already proven in the private sector. Here are a few of my thoughts:

What if Ocado ran military warehouses?

What if Uber ran Military Transport (MT)?

What if Amazon ran military stores?

What if Google created our Military intranet search engine?

How to choose which idea to take forward

By now you’ve hopefully come up with a load of ideas. Perhaps aim for 20. There should be a broad range from ‘absolutely nuts’ to entirely plausible.

Open up a PowerPoint (or equivalent) and chuck each idea onto a slide. Here’s a suggested format:

Text Box: “What if …[short one sentence summary of idea]…
[Picture of the idea in action]
Two sentence value proposition of what this might achieve and perhaps a link to someone who’d done it”

Once you’ve made your slides, print off all 20 – one idea per page.

Now find a big table and invite your bosses (or respected friends) to have a look at each of the ideas, and rank them on the table, in order of which they would like you to develop further.

You may or may not have found something they’re really into – but at very worst it acts as a good range-finder for which way you should be angling your search!


When I did this, my favoured idea didn’t rank near the top. My boss and I argued about it, and I’ll try to help you avoid this situation.

  1. Agree at the start of the process what the output will be – is your boss able to choose 1 or 2 ideas you’re going to proceed with, or have you already decided and you’re just interested in his/her opinions?
  2. Appreciate that people have very differing opinions of what constitutes a good idea. Can you imagine when Mark Zuckerberg first pitched Facebook? “Err, I’ve made a webpage where you can check out the pictures of other people on campus”… I doubt many people thought that was very 10X, but it has changed the landscape of human interaction, makes billions of dollars a year in advertising, and has arguably shaped history. My boss and I argued over whether my preferred idea should be taken forward or not. He ended with a phrase I’ll never forget:

“We’ve disagreed candidly, but now we’ve decided, let’s proceed wholeheartedly”. 

Hd jHub

Once decided, you have to focus on execution.

Executing on your idea

“Ideas are easy, execution is everything”

John Doerr, Billionaire Tech Investor & author of “Measure what matters”

This final point is covered extensively in the next blog “#2 Getting Stuff Done”, but I can’t stress enough that “ideas are easy, execution is everything”.

The slight steer in the context of this blog (which has hopefully led to some bold and risky ideas) is that if you’re going to execute on something risky, try to run small experiments (as per The Lean Startup, a brilliant book) to prove/disprove your major hypotheses early. Don’t be afraid to fail – you’ve got 19 back-up ideas left anyway 😉


So, to conclude – hopefully this blog has given you the confidence to block out a couple of hours a day in your diary for ideation and given you a few tools to systematically develop disruptive ideas. Perhaps it’s also given you a strategy for whittling down your bold ideas – in the next blog, we’ll tackle how to execute on them.

As always, feedback welcomed.

James Kuht


Zero to One Book review

Thoughts on “Zero to One”, a book by Peter Thiel

Zero to One is a short book that opens with the phrase “How to build the future”, and is a concise guide to doing just that… as long as your idea of building the future is founding start-ups that monopolise as-yet unknown or underexploited markets.


Key Messages

The key premise of the book is the claim that entering a competitive market isn’t a very good idea: no matter what USP or incremental gain you can offer, you’re never going to be very successful (read: make very much money). Instead, it proposes that you should aim to monopolise markets by “going from 0 to 1”, i.e. go from no capability to your capability – find a niche and fill it.

The book goes through some tips on how to create such a monopoly and draws on some experiences from the author (a co-founder of PayPal – the first online transfer application – and Palantir, both >$1billion businesses). The key starting point is the question “what valuable company is nobody building?”, and to build a start-up around that – a niche that no-one has yet identified. He even gives a basic framework for formulating such ideas (pg103), but don’t expect to come up with an Uber or Facebook eureka moment during the chapter (well, I certainly didn’t!). He also argues that at this starting point you have to get the foundations right, as things need to be done right first time – and gives a seven-step guide to doing this (pg153).

Once up and running, Thiel turns to the “Power Law”, which becomes another important theme of the book. He casts away the age-old adage of “not putting all of your eggs in one basket”, dismissing our jack-of-all-trades, master-of-none school educations, and argues that once we identify our strength (or company, or investment opportunity) it makes sense to invest all of our time/effort/resource into it to maximise the gain, rather than spread betting.

The latter parts of the book talk about the future and briefly touch on some of the key questions of whether we’ll plateau in our progress, progress indefinitely, be replaced by superhuman AI, or wipe ourselves out by nuclear war, climate change or any number of other eventualities. He also gives his views on artificial intelligence and improving IT, placing himself firmly on the complementarity side of the “will computers/AI substitute for humans or complement us?” debate. He cites his own experience of the way PayPal monitors for credit card fraud (which, at least in 2014, required human and machine complementarity) as evidence for this, but I am left wondering whether he still believes as strongly in this viewpoint four years on, when the AI world seems to be taking quantum steps on an almost yearly basis.



As a Doctor, with very little background in Economics, I found the book a pretty accessible and easy read, and it introduced me to quite a few economic/business concepts I was unaware of.



An easy and fairly enjoyable read, introducing some interesting concepts of how to build a monopolising (or dramatically failing) start-up for the non-business(wo)man, and a (possibly slightly out of date) look ahead to the future.

Where to buy? 

Introduction to me and the site

  1. Who am I?
  2. Where am I now and where am I going?
  3. What is this site for?

1 I’m James, a military doc with a newfound interest in tech, particularly artificial intelligence (AI). I’ve been a doctor for a few years, and over the last couple of years have got involved in some (perhaps what might be considered conventional) medical research.

2 I got interested in the applications of AI after listening to TED talks. I had no experience of programming so, though intrigued, I started off slowly by teaching myself how to make a basic app (more on that later). With that completed after a few months, I spent a bit of time looking into AI. I was fairly quickly able to re-analyse, in a matter of seconds, data that had taken months of statistical analysis the previous summer. To that end, I made it my mission to spread the word within the medical community.

This site is my admission that we need people smarter than me working on these projects and getting involved in finding the best ways to utilise AI in healthcare. It will have two main purposes:

a) To act as a collaborative development platform for demonstrating proof of concept AI-powered clinical tools

b) As an educational platform – documenting my progress so that anyone who wants to can learn from my mistakes and fast-track their own progress, and others can join the debate/challenge my thinking


I hope it helps. Please get in touch with any feedback.