Chances are you’re reading this on a screen shining light directly into your eyeballs. Whether LED, LCD, or OLED, these are all emissive displays, which work by shining light through various filters and pushing photons directly at your retina. This can be harsh, and it’s different from how we naturally see the world, where light reflects off objects and into our eyes.

According to a recent study by Nielsen, the average American spends 10+ hours a day in front of a screen. The short-term effects range from eye fatigue to sleep problems, and we’re only beginning to understand the long-term effects. As technology continues to work its way into every corner of our lives, we will need to find new ways to interface with it in a more natural way.

Voice is one area that I’m particularly excited about (see my blog post for the Future of Sound event). However, just as illustrations can supplement speech between two people to convey ideas, the same is true of graphics + voice compute. For example, if I ask Alexa for the weather, it may be easier to see a simple graphic showing the 5-day forecast than to hear it aloud. This idea of the power of graphics + voice is why I was so excited when I met the team at Lightform.

The founders are a group of technical PhDs and creative wizards who have created a hardware/software solution for projection mapping. Having previously worked at companies like Microsoft Research, Disney, Adobe, and IDEO, they come from diverse and highly capable backgrounds. When paired with a projector, Lightform allows users to map out a space and overlay light to create amazing results.

I’ll be the first to say it: I’m a sucker for a good demo! A good demo not only helps me understand the product, it demonstrates that the product works and that the founders can execute. Go figure, a bunch of ex-Disney guys didn’t disappoint! The most memorable demo came when Brett Jones, CEO of Lightform, asked Alexa to call an Uber.

I do this all the time, and my usual MO post-request is to pull out my phone to see where the Uber is. In this case, Lightform produced an unobtrusive UI element on the wall showing the pertinent information for the Uber request, including ETA, a map, and car information. Brett explained that Alexa communicated the request to Lightform, which was paired to a small laser projector across the room. The software then pulled out the pertinent details and determined the best location, size, and color for a UI element to appear in the environment.

Video projectors continue to increase in quality while decreasing in size and price, driven largely by advances in mobile phone technology. Brett and I discussed what the home of the future could look like: very few screens, with UI elements appearing on demand via Lightform technology, enabling you to easily interact with your home.

I find myself using my Alexa more and more every day. The notion that Alexa, or any voice compute device, could supplement its output with unobtrusive, dynamic graphics using Lightform is incredibly exciting to me. Sometimes when I meet a founder or demo a product, I get the feeling I’m looking at the future. This was certainly one of those times!

(Lightform funding announcement here)


There are several themes I’m currently excited about investing in. In an attempt to attract founders working in these areas and to garner alternative perspectives, I decided to put together the following post. Any and all feedback is welcome, and if you’re an early-stage company working in an area outlined below, please get in touch.

  1. Augmented Reality 1.0 — While the notion of a fully capable and ergonomic headset is appealing, I don’t believe it’s happening anytime soon. We’ll get there eventually, but in the meantime there are interesting opportunities in AR 1.0. For example, a pair of glasses with an integrated speaker, microphone, and Bluetooth connectivity, combined with a voice assistant, could provide all kinds of location-based notification capabilities. Amazon, Google, and anyone else working on a voice assistant would ultimately benefit from such a device (rumor is Amazon is already working on this), but substantial brand value can be gained today by figuring out the hardware, style, battery life, audio, and user experience. Building out a user base and loyalty with a product like this would put the right company in a strong position for the future of AR.

  2. Anything but emissive displays — Chances are you’re reading this post on a screen beaming light into your eyeballs. Whether LED, LCD, or OLED, these emissive (or backlit) displays work by shining light through filters and directly into your retina. This can be harsh and different from how we naturally see the world, where light reflects off objects and into our eyes. The short-term effects range from eye fatigue to sleep problems, and we’re only beginning to understand the long-term effects. We will need to find new ways to interface with technology. Voice is certainly one alternative, as is reflective display technology. Common examples are e-readers, where an external light source reflects off the screen and into your eyes, just like print. Refresh rates and color have historically been a problem, limiting usage to basic reading, but companies like Halion Displays are developing reflective displays capable of rich video and color (I’m an investor). Since there is no backlight, reflective displays are also far more energy efficient. The long-term vision is that reflective displays with resolution and color that match print, and refresh rates that match emissive monitors, will provide a more natural and comfortable experience.

  3. Repetition by robotics — In 1589, Queen Elizabeth refused to grant a patent for a mechanical knitting machine out of fear of putting thousands of knitters out of work. The same fear exists today around robotics but has been proven incorrect time and time again. In fact, robots completing repetitive, and at times dangerous, tasks help drive innovation in new areas better suited to the creative, intelligent human mind. One company working in the space is Dishcraft Robotics, which is building robotic equipment for commercial kitchens, enabling more efficient, hygienic, and humanizing operations (I’m also an investor). They’re largely in stealth at the moment, but big things are on the horizon for this company. I’m particularly interested in companies building robotic solutions in the areas of sewing and environmental scanning.

These three themes are active areas of interest, but I’m also excited about many other topics. If you’re building a company around defensible, innovative technology and would like to talk, please email me: greg at anorak dot vc.


Mental health statistics always boggle my mind. In the US alone, 43M adults experience mental illness in a given year, with 60% going untreated*. $193B in earnings is lost annually due to mental health issues, and 12% of Americans take antidepressants daily**. All this, yet there has not been a major drug breakthrough in the field for nearly 30 years, and talk therapy remains inaccessible to most. These numbers represent an enormous opportunity for improvement. As someone who has struggled with social anxiety and depression in the past, my interest in new treatment options is both personal and professional, as a venture investor.

Today we often think about virtual reality in relation to gaming, but its usage for mental health pre-dates gaming and, some would argue, was the true initial driver of the technology. Some of the earliest pioneering work in virtual reality was focused on PTSD treatment for the military. Palmer Luckey was working in the Mixed Reality Lab at USC on such applications before co-founding Oculus.

Decades of research have found that using VR for exposure therapy results in reduced autonomic arousal, extinction of the fear response, and reduction in the severity of PTSD symptoms***. To take a real-world example, agoraphobia (fear of crowds and public places) affects 200,000 people annually in the US. Using VR, a therapist can simulate a variety of public environments for a patient and coach them through the experience, teaching them various coping techniques.

This concept fascinated me. I thought about how people use practice and repetition in sports to improve. Serving 50 times in a row or shooting free throws for hours on end is how athletes get better. With the availability and power of VR, why shouldn’t people with social anxiety be able to repeatedly practice walking into a crowded room, with a therapist there to provide guidance and techniques to make the situation more tolerable?

Throwing my VC hat on, I went out looking for a startup working on this problem. It was Matt Huang at Sequoia who first introduced me to the team at Limbix. They were working on this exact problem. To boot, the founders, Ben Lewis and Tony Hsieh, were highly accomplished entrepreneurs with a proven track record. Their vision for building a company with the potential to help millions was inspiring, and after one meeting I knew they were building something special. It didn’t take long for me to commit to an investment alongside other fantastic investors like Sequoia Capital, Jazz Venture Partners, Pathbreaker Ventures, and Presence Capital.

Limbix has been in stealth for the last few months, heads down in their Palo Alto office building their product and team. Last week, they went public for the first time, featured on the front page of the New York Times Business section. It’s a great read, and I’m happy to finally be able to speak publicly about one of my most exciting investments. I believe all of my investments are working to make the world a better place, but Limbix is on a mission that is special and very personal to me, and I couldn’t be more excited to be involved.

If you want to join a team revolutionizing an industry with the potential to improve millions of lives, Limbix is hiring. They are also looking to work with mental health professionals interested in exploring the platform. Click here to learn more about Limbix and the positions they’re currently hiring for.



