
I’m pleased to announce that I have joined Anorak Ventures as a Partner, working with Managing Partner Greg Castle to invest in and support exceptional founders in emerging technology (more about me here). I’ve described Anorak’s area of specialization as “Computing in the Third Dimension” – in this post I explain what that means, why it’s novel, and how it will impact the future.

Trapped in a box: the two-dimensional computing interface

The history of computing is widely understood as a series of “eras” of increasing power, each with its definitive leaders:

  • The mainframe era, led by IBM

  • The personal computer era, led by Microsoft

  • The Internet era, led by Google

  • The current ubiquitous computing era, led by Apple in devices, Facebook and Google in consumer services, and Amazon in cloud computing

  • The AI era, which is still in its infancy

Each of these eras made computers simultaneously more powerful and less expensive, making computing more accessible. Cheaper silicon birthed the personal computer era, broadband adoption unlocked the Internet era, and the launch of Amazon Web Services in 2006 and the iPhone in 2007 kicked off the ubiquitous computing era. Through these eras, computers have consistently become faster/better/stronger every year: from VisiCalc’s 254-row limit to petabyte-scale data lakes, or from Usenet posts to Skype calls to FaceTime, computers have gained a bigger role in our lives as they have become more powerful and easier to use.

Despite the onward march of technological power, our experiential interfaces with computers have stagnated in a two-dimensional paradigm. The original Apple Macintosh shipped 38 years ago with a mouse, keyboard, monitor, and printer – the same user interface that we use today.

We still work with computers through an interface invented in 1968 and popularized in 1984.

Smartphones introduced the multitouch interface, but still on a two-dimensional screen. Our entire mental model of software revolves around two-dimensional actions like clicking, dragging, and scrolling. Tellingly, the organizing principle of Web design is the “box model,” forcing every element on every website into the confines of a “box.”

But our sensory systems, and our minds that integrate their input, are inherently three-dimensional and spatial. Written text is ~5,000 years old and pictorial art is ~50,000 years old; spatial reasoning is over 50 million years old and is our most highly developed information interface. We can easily walk through a cocktail party and identify the conversations that are interesting to us, or walk through an office and tune into the right conversations to stay informed. Without spatial reasoning – if we simply listened to all of these overlapping conversations in an audio recording – it would sound like an incoherent jumble. Through two pandemic years of sitting on Zoom, staring at each other in little boxes, we’ve each learned for ourselves that two-dimensional computing simply cannot capture or represent the vibrancy of our three-dimensional world.

On two-dimensional computing surfaces, we lose our mental superpowers and our communication superpowers. Our sarcastic remark is misunderstood as sincere; our request for clarification is misunderstood as a passive-aggressive attack. As a result, our physical selves inhabit an entirely different world from our digital selves, and our lives feel strongly bifurcated between “IRL” and “online” interactions.

We want our online interactions to feel "real" (and they can certainly have major consequences in the physical world), but our two-dimensional online interactions rarely have the emotional tenor of our IRL interactions. After two pandemic years limited primarily to online interactions, restaurants, airports, and highways are packed with people seeking the richer texture of the physical world.

The way forward: computing in the third dimension

The good news is that we are in the dawn of a major computing transition, as important as the advent of the Internet. Computing, having been “trapped” for decades inside the world of structured databases and two-dimensional inputs and outputs, is stepping out into the physical, three-dimensional, rough-edged world. At Anorak Ventures, we call this trend “Computing in the Third Dimension,” and some of its pillars include:

  • Computers are understanding the physical world with computer vision and artificial intelligence, capturing much deeper insights with far less manual data entry

  • Computers are acting in the physical world with robotics, turning our understanding of the world into tangible outcomes

  • Computers are creating synthetic worlds through virtual reality and augmented reality, creating experiences for users that have all of the vibrancy, communication bandwidth, and emotional timbre of physical-world experiences inside entirely constructed environments

  • Computers are using generative AI to supercharge these synthetic experiences, allowing users to “construct their dreams” with experiences unattainable in the physical world, but sensorily indistinguishable from reality.

In all four of these areas, the common thread is that the interface boundaries between digital and physical experiences are being dissolved, bringing the power of technology into the physical world with unprecedented scale, and bringing the power of the physical world into the technological domain with unprecedented detail and subtlety.

Computer Vision and Artificial Intelligence

Computers have always been tools for calculation, record-keeping, and analysis, and the correctness of their outputs has always depended on the correctness of their inputs. In 1864, Charles Babbage, the father of computing, wrote:

“On two occasions I have been asked [by members of Parliament] - ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

Less eloquently referred to as “garbage in, garbage out,” data entry has long been a critical business function. When data entry was performed by humans, and double- or triple-checked by other humans, only high-business-value data even got digitized in the first place. The IRS digitized its collection operations to catch tax evaders, manufacturers used ERP software to manage their operational planning, and financiers digitized capital markets to gain better visibility and control of their risks and opportunities. But for every occurrence in the world that was digitized, billions or trillions of undigitized interactions went completely unrecorded.

This began to change when computers started doing their own data entry, in earnest from the 1970s onward. Banks began using optical character recognition to automatically record check numbers, and retailers used barcode scanners, and later RFID scanners, to automate tracking and inventory. These technologies lowered the cost of data acquisition, but only for pre-defined scenarios with standardized data schemas, like scanning known and labeled objects at a cash register.

Recent advances in sensing hardware and machine learning have vastly increased the surface area of automatic data capture and analysis. Instead of setting up our world for computers – adding barcodes and RFID tags to products and placing scanners in employees’ hands – the data acquisition and analysis can run passively without a human in the loop. Target’s security cameras can track a box of diapers from the warehouse to the store to the trunk of your car. The hardware that acquires data, such as cameras and accelerometers, is getting cheaper and more power-efficient, while the machine learning algorithms that analyze this data are getting increasingly powerful and able to extract higher-level insights. This enables new human interfaces like interactive voice and gesture recognition, as well as software that can analyze and react to data without any human interface.

Is this a good thing? Do we need or want to have an analysis of every time we sneezed, every dog that barked at us, or every blade of grass that we walked on? Perhaps not, but Anorak portfolio company SafelyYou is using computer vision to make our world safer for vulnerable populations.

SafelyYou is tackling the extremely difficult problem of falls among senior citizens. Falls are the leading cause of injury-related death for adults over 65, and even in nursing homes, where assistance is available, falls often go unnoticed because a resident cannot call for help after being injured. SafelyYou monitors a camera installed in the senior’s room and can detect when they have fallen and immediately summon help. Not only can SafelyYou alert caregivers to a fall, it can prevent falls – video review showed that one particular resident had fallen three times by sitting on the edge of her bed while watching TV, and simply putting her TV in front of the chair stopped the problem entirely.

It would have been prohibitively expensive, and intolerably intrusive, for a senior to be monitored in their room 24 hours a day by a human being. Computer vision and artificial intelligence are turning the entire physical world into an input surface, allowing vastly more information about the world to be ingested, processed, and acted upon.


Robotics

Tightly coupled with computer vision/artificial intelligence is robotics. CV/AI is a big step forward in understanding the world; robotics helps us turn that understanding into action.

Robotics is certainly not new – low-intelligence robots have been used for over 50 years in automotive factories to perform spot welding and to move heavy objects into place. Robots have even used computer vision for decades, such as in agricultural sorting to separate out unripe fruit. However, these robots were purpose-built for a single task, and often needed no intelligence or sensing feedback of any kind.

Today’s robots are vastly more versatile than first-generation robotics due to two major trends: the sensing hardware/machine learning trend described earlier, and the increasing power and decreasing cost of actuators: brushless motors, motor controllers, accelerometers, lithium-ion batteries, and the inner-loop control software. Today’s robots do not just mechanically perform an operation again and again – they can sense their environment and choose the right course of action situationally. The most well-known application of this is self-driving cars, but one of the most interesting applications to us is using robotics to conserve valuable natural resources.

Anorak Ventures’ portfolio company Irrigreen has developed a robotic irrigation head that uses hardware and firmware similar to those you would find in an inkjet printer to “print” a precise pattern of water onto the surface of a lawn. Over 30% of America’s municipal water goes towards watering lawns, and close to half of this water is wasted by traditional “plastic stick” lawn sprinklers that can only water in circles and thus have to be wastefully overlapped.

Irrigreen’s robotic lawn sprinkler eliminates waste and overlap by a tight orchestration of software and hardware. After the user configures the shape of their lawn on their smartphone app, the Irrigreen system uses rain forecasts and soil moisture readings to water the lawn precisely as much as needed, adjusting the angle of the head and the water flow rate as the head sweeps out a full circle:

Irrigreen's digital sprinkler head "prints" a precise pattern of water to minimize waste.
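To make the “printing” idea concrete, here is a toy sketch (my illustration, not Irrigreen’s actual firmware or algorithm): the area covered by each wedge of the sweep grows with the square of the throw radius, so delivering a uniform depth of water means the per-wedge water budget must scale with the square of the distance to the lawn edge at that angle.

```python
import math

def wedge_water_volume(radius_m, wedge_deg, depth_mm):
    """Litres needed to cover a circular wedge to a uniform depth.

    Wedge area = (wedge_deg / 360) * pi * r^2, and covering
    1 m^2 to a depth of 1 mm takes exactly 1 litre.
    """
    area_m2 = (wedge_deg / 360.0) * math.pi * radius_m ** 2
    return area_m2 * depth_mm

def sweep_schedule(edge_distance_m, depth_mm):
    """Litres to deliver in each 1-degree wedge of a full sweep,
    given the distance from the head to the lawn edge at each degree."""
    return [wedge_water_volume(r, 1.0, depth_mm) for r in edge_distance_m]

# Hypothetical lawn shape: the edge is 5 m away at 0 degrees,
# tapering linearly to 2 m at the end of the sweep.
edge = [5.0 - 3.0 * deg / 359.0 for deg in range(360)]
plan = sweep_schedule(edge, depth_mm=6.0)
total_litres = sum(plan)
```

A real system would also modulate flow against rain forecasts and soil moisture, as described above; the point of the sketch is just that matching water volume to an irregular boundary is a per-angle calculation, not a fixed circle.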

Sensing hardware, actuator hardware, controller hardware, embedded software, machine learning, and cloud computing all work together to deliver the experience that the Irrigreen customer sees on their smartphone app. Because of these interlocking pieces, robotics companies like Irrigreen are tremendously complicated to build and operate, but the founding teams who can successfully do so (and it is almost always a team, with diverse skill sets and work experiences) can deliver value that pure software simply cannot.

Robotics is turning the entire physical world into a computing output surface to match the rich input interfaces that computer vision and AI have enabled. In tandem, AI and robotics are allowing computers to, in many cases, even exceed humans in their ability to sense and to act. The AI in your Apple Watch can detect that you’ve fallen down and have an elevated heart rate; a robotic drone can now fly a defibrillator to you and save your life.

Virtual Reality and Augmented Reality

Virtual reality (VR) is a technology that may eventually eclipse the Internet in its impact on societies, economies, and human lives. I’ve written more about my most optimistic hopes for virtual reality and the reasons that I believe it’s poised to massively break into the consumer mainstream.

The long-term goal of VR has always been to convincingly emulate any experience. If a person sees a dog in front of them on their VR headset, and can pet the dog and feel its fur with their haptic glove, and can hear it bark, and can form a friendship with the dog over time… is it functionally any different from a real dog? That’s really a question for the philosophers, such as Robert Nozick and his Experience Machine thought experiment.

Philosophy aside, the Experience Machine is already here. Even Meta’s $299 Quest 2 can transport users into virtual worlds by engaging their three-dimensional spatial faculties rather than presenting a two-dimensional windowed experience. When I play Beat Saber for even a few minutes, the feeling of being in an infinite space is so strong that I’m surprised (and a little disappointed) when I take off my headset and find myself in an ordinary room. The impact of VR is even stronger in social interactions, where the illusion of presence creates interactions that feel vastly more real than 2D video calls.

Anorak portfolio company Innerworld takes advantage of not only the increased immersion of social VR, but also the added psychological safety of a remote and anonymized connection. Innerworld offers personal coaching through VR using the techniques of cognitive behavioral therapy (CBT), but in a lower-cost, peer-to-peer model available to those who cannot afford a licensed therapist. This model, called Cognitive Behavioral Immersion, is not only more accessible than licensed therapists, but has specific advantages born of the VR delivery model. The sessions are completely anonymous, which could never happen in a physical service model, and this anonymity allows people to openly discuss topics that they find challenging to discuss in person, even with a licensed professional.

VR is Anorak’s first and heaviest focus area: Managing Partner Greg Castle invested in Oculus’ seed round in 2012, and less than two years later, Facebook had acquired the company for $3 billion, making Oculus the first of six unicorns so far in the Anorak portfolio. Oculus created the modern virtual reality renaissance, and we continue to invest heavily in the VR sector (OssoVR, PrismsVR, Rec Room, and many others).

The dawn of generative AI

Unlike AI, robotics, and virtual reality, which are trends already well underway, generative AI is in its absolute infancy, but it is accelerating explosively. OpenAI’s DALL-E 2 can construct an image from only a text prompt:

DALL-E 2 creation from only the caption: "teddy bears working on new AI research on the moon in the 1980s"

… while NVIDIA’s Instant NeRF (Neural Radiance Fields) can synthesize a virtual 3D environment from only a few seconds of scanning:

Created by Karen X. Cheng.

It doesn’t take a large leap of imagination to picture “speaking” a virtual world into existence with a short prompt and experiencing it in VR. People will be able to spend time with their deceased loved ones, live out alternate lives and entire realities, experience historical events as though they were real, and enjoy experiences like space travel that would otherwise be attainable only to the narrowest elite. Anorak Ventures does not yet have any portfolio companies in generative AI, but we are eager to invest in this sector.

Computing in the Third Dimension and the future of human-computer interaction

After 38 years of the mouse, keyboard, and monitor, computing is finally breaking free of the two-dimensional interface, and the boundaries between the physical and the virtual worlds are rapidly collapsing. In the next five years, we expect to see:

  • Continued improvement in AIs that source proprietary datastreams and derive insights from these datastreams

  • A Cambrian explosion of robotics, both in form factors and applications, to do everything from services to industrial manufacturing to healthcare

  • An increasingly greater amount of our “screen time” dedicated to VR, and VR being the best way to remotely establish the human connection that was so often found lacking in remote work during the COVID-19 pandemic

  • AI-driven flights of fancy that turn our wildest dreams into virtual worlds we can explore and eventually inhabit

I’m extremely excited to join Anorak as Greg’s first partner and look forward to investing in the founders who are building this world. If you are one of these founders, let’s get to know each other.


Updated: Aug 3, 2022

In the first 10 minutes of this year’s Facebook Connect conference, CEO Mark Zuckerberg said the word “metaverse” 17 times. An hour later he announced that Facebook, one of the largest companies on the planet, was changing its name to Meta. But what does this nebulous term actually mean?

The term originated in Neal Stephenson’s 1992 sci-fi novel Snow Crash. It refers to a virtual, 3D, videogame-like world where people are represented by avatars. Users, be they individuals or corporations, can build destinations like games, music venues, and social clubs along “the Street,” which bisects the entire metaverse. To do so, they must obtain planning approval from, and pay fees to, a trust that’s responsible for server fees and general upkeep of the metaverse. Multiple currencies are mentioned throughout the book, both fiat and otherwise. In summary, no single company owns the metaverse, and no single currency rules it.

While companies talk about building the metaverse, what 99% of them are actually building is more akin to a microverse. A microverse has little, if any, interoperability with other microverses. It can monetize through subscription fees and by selling items and power-ups. It may or may not have its own currency enabling an in-game economy, and friendships and social graphs are microverse-specific. Think Roblox.

Then there are macroverses, which are essentially collections of microverses owned by a single entity. Items and access are still sold per microverse, but because each is owned by the same entity, a single currency may be used. Elements like identity, achievements, and social graphs can be shared, although skills are largely microverse-specific. Think EA’s Origin or Activision’s Battle.net.

Lastly, there is the Metaverse: a universal protocol that makes all things within it interoperable. It’s like reality, only digital, with the rules existing in software rather than nature. The challenge is that while nature dictates the laws of physics, software is created by people who don’t always agree on the laws, which is how we end up with different countries and religions, and why we have 8,100 different cryptocurrencies. The challenge is further complicated when people are incentivized to drive value to their particular belief system, which in the case of Web3 is a core principle. Over time, universal standards created by centralized authorities are needed, which in the case of cryptocurrency and Web3 is somewhat paradoxical to their decentralized ethos; this is where DAOs can help. But I digress…

The conclusion I’ve come to is that while it’s unlikely we’ll see the singular metaverse described in Snow Crash in my lifetime, I expect to see more interoperability between micro- and macroverses. This will appear small at first, perhaps the ability to read data from a crypto wallet like MetaMask, but will ultimately become functionality people come to expect. This is where I think the metaverse opportunity lies: in the small threads that can one day form the rope that pulls the world toward that universal protocol that is the metaverse.


The old adage goes, “Give a person a fish, you feed them for a day. Teach a person to fish, you feed them for a lifetime.” But what if you make a living selling fish? Teaching people to catch their own fish might not be in your best interest. Now substitute a trillion-dollar public company for your average fishmonger, and iPhones for fish. What you have is the crux of the right-to-repair electronics movement.

Consumer electronics companies like Apple would rather sell you a new product than provide information on how to repair the one you already own. While Apple offers the Genius Bar for repairs, those repairs can be costly and visits are time-consuming. Beyond consumer electronics, farmers recently found they could no longer complete simple repairs on the John Deere tractors they own without going to John Deere. Arguments from manufacturers range from ensuring customer safety to protecting IP, but the cynic in me says they just like to sell fish.

In the automotive industry, a law was passed in 2012 requiring car manufacturers to provide information to owners and service centers so repairs can be conducted. While Tesla has pushed back on this, the laws are generally respected. Inspired by these laws, The Repair Association was formed in 2013 to push for the adoption of the same laws in the consumer electronics industry. Adoption in this industry is slow, with Apple showing little interest in the topic.

The byproduct of these difficult-to-repair electronics is 50 million tons of e-waste every year, or 13 lbs per human on the planet. Only 17% of e-waste gets recycled. Toxic materials in e-waste like mercury, cadmium, and lead end up poisoning groundwater when not disposed of properly. There’s a great video here summarizing the e-waste problem.
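The per-person figure above checks out with simple arithmetic. A quick sanity check (the short-ton unit and a world population of roughly 7.7 billion are my assumptions, not stated in the original):

```python
# Sanity-check the "13 lbs per human" e-waste figure.
E_WASTE_TONS = 50_000_000          # annual e-waste, assumed US short tons
LBS_PER_SHORT_TON = 2_000
WORLD_POPULATION = 7_700_000_000   # approximate world population

lbs_per_person = E_WASTE_TONS * LBS_PER_SHORT_TON / WORLD_POPULATION
print(f"{lbs_per_person:.1f} lbs of e-waste per person per year")
```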

Our latest investment in Framework is setting out to fix this problem. From their website…

Consumer electronics is broken. We’ve all had the experience of a busted screen, button, or connector that can’t be fixed, battery life degrading without a path for replacement, or being unable to add more storage when drives are full. Individually, this is irritating and requires us to make unnecessary and expensive purchases of new products to get around what should be easy problems to solve. We need to improve recyclability, but the biggest impact we can make is generating less waste to begin with by making our products last longer.

The conventional wisdom in the industry is that making products repairable makes them thicker, heavier, uglier, less robust, and more expensive. We’re here to prove that wrong and fix consumer electronics, one category at a time. Our philosophy is that by making well-considered design tradeoffs and trusting customers and repair shops with the access and information they need, we can make fantastic devices that are still easy to repair. Even better, what we’ve done to enable repair also opens up upgradeability and customization. This lets you get exactly the product you need and extends usable lifetime too.

We know these are big claims and consumer electronics is littered with the graves of companies with grand ideas and failed executions. The proof is going to be in the products. We’re excited about the team of fantastic engineers and designers we’ve pulled together who are carrying hard learned lessons from what we’ve built before, and we’re grateful for the capable and competent partners we’re working with who believe in our mission. We are looking forward to showing you the Framework Laptop and showing the industry and the world a framework for a better way.

Framework’s initial product is a laptop but they will be expanding into all areas of consumer electronics using the same strategic framework in the future. Tested did a fantastic video review with CEO and Founder Nirav Patel here. It’s a huge swing but how often do you have the opportunity to both save the planet and invest in a company with the potential to be the next Apple?
