By Troels Marstrand
April 25, 2025
Way back in 2017, Andrew Ng proposed a “highly imperfect rule of thumb”: that “almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI”.
In the AI world, 2017 is light-years ago, but if we extend the sentence a bit it is still a good mental model for understanding what AI can do: any repetitive task a highly trained human can do without extraordinary effort, we can probably now or in the near future automate using AI*.
*Terms and conditions may apply: does not cover empathetic tasks, advanced reasoning, one-shot learning, or perfect recall/precision, and may cause harm to humans if not applied ethically.
To quantify this statement further, here are some reference points for where AI systems shine and where they are mostly useless (today and likely in the near future).
Defect spotting in manufacturing (surface scratches, misalignments, etc.)
Object identification in images (people, animals, vehicles, etc.)
Face recognition and verification
Reading text from images (OCR)
Classifying image content (indoor/outdoor, day/night, etc.)
Detecting emotions from facial expressions
Identifying brand logos or specific objects
Screening medical images for anomalies (X-rays, CT scans)
Translation between languages
Transcribing speech to text
Summarizing text content
Sentiment analysis of text
Categorizing documents by topic
Grammar and spelling correction
Answering factual questions
Language identification
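To make the text tasks above concrete, here is a deliberately minimal, lexicon-based sentiment scorer. It is a toy illustration only — real systems use learned models, not hand-picked word lists — and the word sets below are assumptions chosen for the example.

```python
# Toy lexicon-based sentiment analysis -- a crude stand-in for the learned
# text classifiers that handle this task robustly in production.
POSITIVE = {"great", "excellent", "love", "good", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment(text: str) -> str:
    """Score text by counting positive vs. negative lexicon hits."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this product, it is excellent!"))  # positive
print(sentiment("Terrible service, I hate it."))           # negative
```

The point of the sketch is the task shape — map free text to a label — not the method; modern models learn these associations from data rather than from a fixed lexicon.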
Voice recognition/speaker identification
Music genre classification
Detecting specific sounds (glass breaking, gunshots, etc.)
Speech emotion recognition
Background noise classification
Audio quality assessment
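The sound-detection tasks above can be caricatured with a simple energy threshold: flag any audio frame whose loudness jumps far above the background. This is a toy sketch (real detectors are trained neural networks); the frame size and threshold values are illustrative assumptions.

```python
import math

def detect_events(samples, frame_size=100, threshold=0.5):
    """Flag frames whose RMS energy exceeds a threshold -- a crude stand-in
    for learned sound-event detectors (glass breaking, gunshots, ...)."""
    events = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        rms = math.sqrt(sum(x * x for x in frame) / frame_size)
        if rms > threshold:
            events.append(start)
    return events

# Quiet background noise with one loud burst starting at sample 500.
signal = [0.01] * 1000
signal[500:600] = [0.9] * 100
print(detect_events(signal))  # [500]
```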
Anomaly detection in time series data
Credit card fraud detection
Customer churn prediction
Basic pattern recognition in structured data
Simple classification tasks (spam/not spam)
Recommendation systems based on past behavior
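The spam/not-spam example in the list is one of the oldest classification wins for machine learning. As an illustration, here is a tiny multinomial Naive Bayes filter built from scratch on invented toy data — the training phrases are assumptions for the demo, and production filters use vastly larger datasets and richer features.

```python
import math
from collections import Counter

class NaiveBayesSpam:
    """Tiny multinomial Naive Bayes spam filter -- illustrative only."""

    def fit(self, docs, labels):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.priors = Counter(labels)
        for doc, label in zip(docs, labels):
            self.counts[label].update(doc.lower().split())
        self.vocab = {w for c in self.counts.values() for w in c}

    def predict(self, doc):
        best_label, best_logp = None, float("-inf")
        for label in self.counts:
            # log prior + log likelihood with add-one (Laplace) smoothing
            logp = math.log(self.priors[label] / sum(self.priors.values()))
            total = sum(self.counts[label].values())
            for w in doc.lower().split():
                logp += math.log((self.counts[label][w] + 1)
                                 / (total + len(self.vocab)))
            if logp > best_logp:
                best_label, best_logp = label, logp
        return best_label

clf = NaiveBayesSpam()
clf.fit(
    ["win free money now", "free prize claim now",
     "meeting at noon", "see you at lunch"],
    ["spam", "spam", "ham", "ham"],
)
print(clf.predict("free money prize"))  # spam
print(clf.predict("lunch meeting"))     # ham
```

Even this twenty-line model captures the essential mechanism — word frequencies shifted by class — that made spam filtering one of AI's earliest everyday successes.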
Predicting equipment failure before visible signs appear (by detecting subtle vibration patterns)
Identifying early-stage diseases from biomarkers that doctors might overlook
Detecting fraudulent financial transactions using patterns across thousands of variables
Identifying authorship based on subtle writing style fingerprints
Predicting consumer behavior shifts before they appear in sales data
Analyzing thousands of hours of surveillance footage in minutes
Processing millions of research papers to find connections humans haven’t made
Reviewing large legal contracts faster than specialized lawyers
Scanning decades of climate data to identify subtle trends
Analyzing satellite imagery over time to detect small environmental changes
Predicting successful movies based on seemingly unrelated script elements
Identifying at-risk students using behavioral patterns invisible to teachers
Determining optimal pricing strategies from hundreds of market variables
Finding unexpected drug interactions across diverse patient populations
Predicting material properties without running physical experiments
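The predictive-maintenance and fraud items above all reduce to the same core skill: spotting a data point that deviates from a learned baseline. A minimal sketch of that idea is a z-score detector over a sensor stream — the readings below are synthetic, and real systems learn far subtler multivariate patterns.

```python
import statistics

def zscore_anomalies(series, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean --
    a crude stand-in for the subtle-pattern detectors described above."""
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    return [i for i, x in enumerate(series)
            if abs(x - mean) > threshold * stdev]

# Steady vibration readings with one spike at index 50.
readings = [1.0] * 100
readings[50] = 9.0
print(zscore_anomalies(readings))  # [50]
```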
Generating novel molecular structures for drug discovery
Creating artwork in specific styles that fools art experts
Composing music that resonates emotionally with listeners
Writing code that outperforms human programmers in specific tasks
Designing optimized physical structures humans wouldn’t conceive
Understanding context across multiple data types (text, image, numerical)
Translating between specialized domain languages (technical, medical, legal)
Identifying personality traits from combinations of behaviors
Detecting subtle audio/visual mismatches in deepfakes
Combining multiple sensory inputs to navigate complex environments

On the other side of the ledger, here is where AI systems are still mostly useless:

Understanding why you can’t put an elephant in a refrigerator
Temporal reasoning: “I dropped the glass, and it shattered on the floor. Was the glass shattered before it hit the floor?”
Object permanence: “If you put a toy under a box and walk away, is the toy still there?”
Understanding basic physics of everyday objects
*Newer models and the introduction of chain-of-thought reasoning are slowly encroaching on this area. Current models have some success in common-sense reasoning, but we also see spectacular failures. Proceed with caution.
Detecting sarcasm or irony consistently
Understanding cultural references without explicit explanation
Following conversations with implied information
Understanding how to pack items efficiently in a confined space
Navigating through cluttered environments without collisions
Predicting how objects will interact when manipulated
Understanding which objects can fit through which openings
*Again, multimodal LLMs with image capabilities are starting to mimic spatial understanding, but they currently have no real grasp of perspective and occlusion.
Generalizing knowledge to entirely new contexts
Solving problems using tools in creative ways
Adapting to unexpected changes in task parameters
Handling exceptions to rules without explicit programming
Manipulating unfamiliar objects with appropriate force
Tying knots or handling flexible materials
Working with transparent or reflective objects
Performing fine motor skills in varying conditions
Understanding appropriate behavior in different social contexts
Recognizing when to make exceptions to rules for ethical reasons
Balancing competing values in complex situations
Judging when humor is appropriate vs. inappropriate
Understanding human motivations behind actions
Distinguishing intentional from accidental behaviors
Recognizing deception or manipulation
Inferring long-term goals from short-term actions
This means that, despite the major progress we have seen in AI over the last couple of years, AI is still mostly a productivity booster. It needs a human in the loop for most things to create meaningful impact. It needs specific constraints and clear interpretation. It is excellent as a narrow intelligence but poor as a strategic thinker.
The most effective implementations combine AI’s computational power with human judgment, creativity, and ethical reasoning. We’re seeing this hybrid approach succeed across industries—from healthcare diagnostics augmented by radiologists to legal document analysis guided by attorneys.
The strategic question isn’t whether AI will replace humans, but how we can design systems that maximize the unique strengths of both human and artificial intelligence.
I am excited to see how we may need to revise Andrew Ng’s statement by 2030. Maybe it will sound something like this: “Any task done by AI can match or exceed human capability in execution, but requires human oversight proportional to its complexity and potential impact.”
We work with a select number of leaders who are serious about winning. If that’s you, let’s talk.