Today we are in the early stages of a transition in how we deploy, manage, and use computer systems. Like earlier transitions, such as the emergence of cloud and mobile computing, it’s first visible to those who are paying close attention or are working on it themselves.

We are witnessing the very beginning of a shift toward edge computing, a neologism that simply means the storage and processing of information close to where it’s used and on machines that belong to the end users.

Edge computing isn’t new. It was the dominant paradigm from the late 1970s until the mid-2000s. Back then it was called personal computing. A computer was something you kept under your desk or perhaps in your backpack, and it was where you stored all your data and ran your software.

Beginning in the late 1990s and reaching a tipping point around 2010, the PC era was brought to a close by the advent of ubiquitous Internet connections that made accessing larger remote systems fast and easy. The industry attached the term “cloud” to these invisible remote systems and the age of cloud computing was born.

The move toward cloud computing was driven by many advantages. Software in the cloud could run all the time and be accessed from anywhere. Invisible employees of cloud software and hosting companies magically managed IT resources and handled software updates behind the scenes. Huge amounts of data could be leveraged, and economies of scale allowed short bursts of massive processing power to take the place of long hours spent waiting for a personal computer to crunch data. Multiple users could collaborate on documents and data sets from anywhere.

The simultaneous emergence of mobile computing closed the cloud computing deal. Mobile phones and tablets are extremely portable and convenient but their power is limited by size and battery life. Using these small devices as “thin clients” to access the cloud proved a logical way to expand their power without increasing their cost, and storing data elsewhere is particularly attractive when the local machine is something that can be stepped on or dropped in a toilet.

Cloud computing looked new to many, but was really the return of something old and familiar. Prior to the PC era, a computer was a huge expensive thing called a mainframe that was the size of a room at a minimum. Mainframes were managed by teams of technicians and accessed by way of desktop terminals that didn’t do much more than take input from a keyboard and show output on a screen.

Just replace mainframe with cloud, and terminal with mobile device. The cloud era, it turns out, is simply “mainframe computing 2.0.”

The history of computing is full of cycles. Operating systems, languages, and applications start simple, become complex, and then are rewritten to make them simple again. Computing functionality is moved from general purpose CPUs to specialized hardware and then back again. Platforms and software stacks oscillate between closed and open, bundled and unbundled, monolithic and modular.

To a casual observer it looks like an exercise in futility, but each iteration is a learning experience. When software goes from simple to complex to simple again, that newfound simplicity incorporates a new level of understanding of the problem domain. When platforms and software stacks bundle and unbundle, each turn of the wheel brings new capabilities arranged in new and usually more useful ways.

We think the move to edge is another one of those cycles. This new shift is being driven by many factors. Broadband Internet advances like the increasing deployment of fiber optics, 5G, and low-Earth-orbit satellites will conspire to close the network performance and reliability gap between the edge and the cloud. Dozens of CPU cores and terabytes of fast local storage are becoming easily affordable. Advances in distributed software engineering techniques like automated consensus and orchestration, continuous integration, intelligent peer-to-peer data replication, blockchains, and network virtualization are making it possible to realize cloud-like automation and fault tolerance everywhere. Meanwhile, concerns about privacy, security, cost, and data sovereignty are driving more and more users to reconsider the wisdom of storing and processing their most sensitive information in systems they do not control.

Here at Anorak Ventures we pride ourselves in investing in those rare and transformational ventures that will help to build the world of tomorrow. We’ve been watching this shift and looking for those companies that are best positioned to play a pivotal role in facilitating it.

The cloud may be conceptually similar to the mainframe, but it looks nothing like it, and its capabilities are vastly greater. We likewise do not believe this new era of edge computing is going to look or act like the PCs of old. It’s not going to take the form of beige boxes under desks, but of networks of directly connected personal computing devices. Systems located at remote cloud data centers won’t go away, but their role will be as participants offering specialized services rather than as the central point where everything occurs. The new era of edge computing will be the cloud, but everywhere, owned by everyone, and built on open-source protocols.

Old networking technologies like physical routers, firewalls, and VPN boxes are neither flexible nor ubiquitous enough to power this new world. We need new networking technologies that make it easy to connect anything to everything securely and with the performance and scalability only peer-to-peer networking can deliver. That’s why we’re announcing our investment in ZeroTier, a company founded to build the network layer for the second age of personal computing.

ZeroTier’s mission is to “directly connect the world’s devices” and in so doing to functionally erase the distinction between the cloud and the edge. ZeroTier’s software makes it possible to easily manage networks as if the entire planet is one cloud “region.” PCs, phones, tablets, special-purpose IoT devices, cloud virtual machines, and even discrete applications can now be joined directly as if they’re all plugged into the same Ethernet switch, even if they’re mobile or scattered across the world. ZeroTier’s open-source protocol has been around since 2011 and has seen strong adoption in recent years. Learn more at Zerotier.com and get in touch if you’re interested in getting involved. They’re hiring.
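To make the idea concrete, here is a rough sketch of what joining such a virtual network looks like with the ZeroTier client; the 16-character network ID below is a placeholder, not a real network:

```shell
# Install the ZeroTier client (official one-line installer)
curl -s https://install.zerotier.com | sudo bash

# Join a virtual network; replace the placeholder with your network's ID
sudo zerotier-cli join 1234567890abcdef

# Confirm the node is online and list the networks it belongs to
sudo zerotier-cli info
sudo zerotier-cli listnetworks
```

Once the network’s administrator authorizes the new node, it behaves as if it were plugged into the same Ethernet switch as every other member, regardless of where it physically sits.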


Last week Anorak held its second annual partner meeting. The goal is to bring together our LPs and portfolio companies to learn, connect, and support each other. I’m grateful to all those who took time out of their busy schedules to attend, and I feel inspired by the support shown both during and after the event. What follows are the slides I presented, providing a macro perspective on venture and Anorak, along with some color commentary.

Special thanks to these guys for sponsoring both the event and reception!


- Learn about our companies
- Connect with each other to explore mutual interests and opportunities
- Support our companies through advice, contacts, and resources


Special thanks to Ryan Petersen, Flexport CEO, who gave an inspired fireside chat


Value outpacing deal count, due in large part to later-stage mega-rounds


Previous point visualized


And again…


Since the beginning of the year there has been a material impact on Asia. Previously there was a lot of media coverage of the current administration’s issues, but it only recently started impacting the numbers. A significant decrease in new funds and reductions in startup valuations have delayed capital deployment in the region.


The majority of Fortune 500 companies have a CVC arm, and companies are using CVC as an extension of their business development efforts


Increasing number of funding options available


🤦 What was previously seed is now pre-seed


Q1 2019 was the largest in history. If this continues, we’re sure to have the biggest year ever for US exits


Steady pace of deployment for Anorak, about 1.25 deals per month


Adjusted for investment size, the average post-money valuation for Anorak is $12.5M


Last year VR/AR accounted for 42% of the portfolio; we’ve since diversified, and it now comprises 31% overall


One year ago, the Bay Area was home to 58% of Anorak companies. As we find more interesting and cash-efficient startups outside the Bay Area, that number has dropped to 50%



Recording and transferring information has historically been a 2D exercise. From cave walls and parchment to paper and silicon, the X and Y axes have defined the primary plane. But advances in computer vision, driven in part by robotics, XR, and autonomous vehicles, have produced massive amounts of 3D data that require new infrastructure. At Anorak Ventures, we are actively investing in startups addressing this need.

In 1992, a 3D game engine helped the masses appreciate the Z axis for the first time. Wolfenstein 3D, released by id Software, provided a new level of immersion and changed the genre of gaming forever. Creating these experiences was initially left largely to professionals. Unity came along in 2006 and made it easy(er) to be a creator, providing a familiar UX/UI and create-once/deploy-anywhere functionality that resulted in a vibrant user community.

The trend of more accessible tools continues, as does the amount of 3D data we are collecting. Satellites and planes are collecting geospatial data from the skies, and autonomous vehicles, from the streets. Companies are building 3D replicas of their products using laser scanning and photogrammetry capture technology to provide new ecommerce experiences. Recent releases of ARKit and ARCore are making it easier than ever to leverage these assets.

Further pushing this trend is the growth of virtual and augmented reality, where 3D is a necessity. As we increasingly spend time in the digital world, our digital “stuff” will rival the importance of our physical “stuff”. Non-fungible tokens are providing authenticity verification and scarcity, making ownership and value more real. Amazon is the destination for physical goods, but the destination for digital goods is yet to be determined.

I first learned about Sketchfab in 2012 during my time at Autodesk. I was intrigued by 3D marketplaces and spent countless hours doing build/buy/partner analysis on the space. The thesis was that while there was a group of people who love building 3D models from scratch, using tools like Maya and 3DSMax, most people have neither the time nor the skill to do so. Many would rather modify and customize existing items in order to accomplish a task and flex their creative muscle. Enter Sketchfab! Taken directly from their website…

“We started Sketchfab in Paris, France, in early 2012. We were frustrated to see so many creators spending hours on making great 3D models, but ending up sharing boring screenshots as there was no better solution to showcase their work… Our community quickly grew to a mix of artists, designers, architects, hobbyists, engineers, brands, museums, game studios, schools and more. Today, with easier creation tools such as Minecraft or Tilt Brush, and 3D capture coming to our smartphones, everyone is becoming a 3D creator. Our goal is to turn 3D into a mainstream media format.”

Sketchfab now has more than 3,000,000 models on its platform and represents the largest single repository of 3D content on the web. The rate at which users are uploading content is also increasing at a steady clip. Creating an account and uploading models is free, and they’ve never allowed advertising on their site. This has built a strong brand, substantial goodwill, and an active, passionate community now numbering around 2,000,000 people.

Sketchfab was not actively raising at the time I invested, but once Sketchfab’s CEO, Alban Denoyel, had enough of my harassing emails, he shared some information with me. The numbers immediately reinforced my thesis on the continual acceleration of 3D content creation. The team works hard to maintain direct export capabilities from every major 3D content creation pipeline, 74 to be exact. This includes CAD software like Revit and Solidworks, game engines like Unity and Minecraft, 3D scanning software like Cappasity and RealSense, as well as modeling applications like Max, Maya, Blender, C4D, Houdini, and Modo.

The company has also made it easy for other platforms to leverage their 3D library with their recently announced SDK. Major players like Apple and Facebook have recently announced partnerships with Sketchfab to enable users to pull items directly from their library into native experiences. Sketchfab is providing the pipes that enable the fluid movement of 3D objects around the web. They’re also working with an increasing number of enterprise clients to provide a robust 3D backend and hands on support.

Just as YouTube made it easy and convenient for creators to upload videos and link to them, Sketchfab is positioning itself similarly for 3D content, and I believe the opportunity is just as big. If this post resonates and you’re looking for new opportunities, Sketchfab is hiring and would love to hear from you.

 

