You know, it’s been a while since I’ve written up a clever Google Glass hack simply because it was awesome. Let’s fix that.
Looking to test the concept of using Glass as a second screen, Android developer Mike DiGiovanni has managed to capture Grand Theft Auto’s oh-so-crucial in-game GPS interface, beaming it to the player’s eyepiece in real time.
Now, if you’ve spent every free minute since GTA V’s release blasting around Los Santos, one caveat: Mike had to go back a few generations to make this work. It requires GTA to be running on a computer, which, as many a scorned PC gamer could tell you, means Grand Theft Auto V is out. GTA 4, meanwhile, didn’t want to boot up on any of Mike’s systems. So this is all built around 2001’s Grand Theft Auto 3.
(A render of what the player sees when using Mike’s setup. Capturing and properly portraying things actually running on Google Glass is really, really tough — hence the lack of video).
So, how does it all work?
While Google has promised to give developers a way to communicate from device to device, they haven’t released much on that front yet. So Mike built his own two-part solution: on the PC, you’ve got an app that captures the portion of the screen where GTA’s on-screen GPS unit sits and sends it off to your Glass unit. On Glass, you’ve got an application (built on the “plain old Android SDK”, as Mike tells me, since Google has yet to release the official, native Glass SDK) that listens for the GPS visuals to be fired over the WiFi network and then pushes them to the display.
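For the curious, here’s a rough sketch of what the Glass-side piece could look like with the standard Android SDK: a bare-bones Activity that waits for length-prefixed JPEG frames over the network and pushes each one to the display. The class name, port, and framing are my assumptions for illustration, not Mike’s actual code.

```java
// Hypothetical Glass-side receiver, built on the plain Android SDK.
// Assumes the PC app sends each GPS frame as a length-prefixed JPEG.
import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Bundle;
import android.util.Log;
import android.widget.ImageView;

import java.io.DataInputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class GpsMirrorActivity extends Activity {

    private ImageView display;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        display = new ImageView(this);
        setContentView(display);

        // Listen on the WiFi network for frames pushed by the PC-side app.
        // (Requires the INTERNET permission in the manifest.)
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    ServerSocket server = new ServerSocket(9000); // port is an assumption
                    Socket pc = server.accept();
                    DataInputStream in = new DataInputStream(pc.getInputStream());
                    while (true) {
                        int size = in.readInt();          // each frame is length-prefixed
                        byte[] jpeg = new byte[size];
                        in.readFully(jpeg);
                        final Bitmap frame = BitmapFactory.decodeByteArray(jpeg, 0, size);
                        runOnUiThread(new Runnable() {    // UI updates must happen on the main thread
                            @Override
                            public void run() {
                                display.setImageBitmap(frame); // push the GPS frame to the Glass display
                            }
                        });
                    }
                } catch (IOException e) {
                    Log.e("GpsMirror", "Lost connection to the PC", e);
                }
            }
        }).start();
    }
}
```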
Is it a bit hacky? Absolutely! But as a proof-of-concept to demonstrate how devices like Glass can be used as a secondary display, it gets the job done, and does a damned good job of conjuring up further concepts. Imagine being able to say “Okay Glass, set waypoint to Ammu-Nation” instead of having to pause the damn game every time. Imagine Metal Gear Solid’s signature video chats floating in front of your eyes as the game carries on. Are developers likely to embrace relatively niche wearables like Glass any time soon? Probably not – but it’s a damned fun thing to think about.
Mike tells me that the whole thing runs at “about 10 frames per second” with minimal delay. “[It’s working] to the point where you could look only at Glass and still drive around,” he says. “You would likely run into [virtual] pedestrians, but you could definitely drive around the streets perfectly.”
Before you start diggin’ around for a link to download the app for yourself, a heads up: while Mike says he “definitely has plans to release it”, it’s a “very fragile proof of concept” at this point. Amongst other things, the heavy use of WiFi paired with the need for the display to stay on means the app chews through Google Glass’ battery in about an hour. He’ll likely hold off on releasing it until Google ships the official Glass SDK and he gets a chance to polish it up accordingly. Mike previously made headlines with the release of Winky, a Glass app that lets you snap photos by winking your right eye, and he’s released a number of other apps on his personal site here.
I got a chance to chat with Mike about the project and the technical hurdles involved. Our chat provided some pretty interesting insights into the current state of Glass development (and his plans for this project moving forward), so I’ve pasted the dialog (with Mike’s permission) below:
Can you explain the setup a bit?
Right now there’s no officially supported way for a Glass device to communicate with another device in real time. Older versions of Glass and the My Glass companion app for Android hinted at a way that we could communicate between an app on a phone and Glass, but those have disappeared from recent builds.
This [project] sets up a channel for network communication between an app on Glass and a small piece of software on your PC. Once that’s set up and you start playing GTA, you will see the GPS navigation on the display of Glass. It’s really similar to how you would use the built-in GPS navigation of Glass in a real car.
How fast does it update? Is it in realtime?
It currently gets about 10 frames per second, but it’s effectively realtime and smooth enough that you could look only at Glass and still drive around. You would likely run into pedestrians, but you could definitely drive around the streets perfectly. It’s definitely technologically possible to improve that to the point where it’s as smooth and fast as watching any video.
What sort of tech did you use here? Which SDK?
I used the plain old Android SDK. The real GDK for Glass development is rumored to be coming out this month, but right now you can dive in with the standard Android SDK. There’s a handful of things to watch out for, but after releasing quite a few native Android SDK apps for Glass, I know what to keep in mind. I actually just gave a talk to developers in Toronto at Screens 2013 about how to develop for Glass without owning the Glass hardware, a large portion of which covered the potential pitfalls of using the standard Android SDK.
How are you grabbing the data and pushing it to Glass?
The tech side of it is really basic. We have a small piece of software running on the PC that’s playing GTA. This software is pretty much pulling a subsection of the screen and streaming it to Glass over your network.
There are a few downsides to this approach: it doesn’t look amazing, and it’s prone to network congestion.
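In rough terms, the PC side of an approach like this could boil down to something like the sketch below, which uses java.awt.Robot to grab the GPS subsection of the screen and push length-prefixed JPEG frames over the network. The capture region, address, frame rate, and framing are illustrative assumptions, not details of Mike’s software.

```java
// Hypothetical PC-side capture loop: grab the corner of the screen where the
// GPS sits and stream it to Glass as length-prefixed JPEG frames.
import javax.imageio.ImageIO;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.net.Socket;

public class GpsCapture {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        Rectangle gpsRegion = new Rectangle(20, 780, 260, 260); // where the minimap sits on screen (assumed)
        Socket glass = new Socket("192.168.1.50", 9000);        // Glass's address on the WiFi network (assumed)
        DataOutputStream out = new DataOutputStream(glass.getOutputStream());

        while (true) {
            BufferedImage frame = robot.createScreenCapture(gpsRegion); // pull just the GPS subsection
            ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
            ImageIO.write(frame, "jpg", jpeg);
            byte[] bytes = jpeg.toByteArray();
            out.writeInt(bytes.length);                                 // length-prefix each frame
            out.write(bytes);
            out.flush();
            Thread.sleep(100);                                          // roughly 10 frames per second
        }
    }
}
```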
If given the chance to turn this into a fully featured product, we would likely choose to build out the software that runs on the PC and do image analysis to turn the map data into non-bitmap data. This would likely result in a much crisper image on Glass and give us the ability to customize the map completely. This is pretty much the perfect solution to “hacking” support onto any sort of existing game.
If we had the opportunity to turn this into an officially supported function of a game or other product, it becomes much easier to send perfect data or images to Glass directly from the game.
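To give a sense of what that “non-bitmap data” could amount to, each frame might be nothing more than a handful of coordinates that the Glass app renders into a map itself. The fields below are purely hypothetical; they are not drawn from Mike’s project or from GTA’s actual data.

```java
// Hypothetical structured map payload, sent instead of a bitmap.
import java.io.DataOutputStream;
import java.io.IOException;

public class MapState {
    double playerX, playerY;      // player position in game-world coordinates
    double headingDegrees;        // which way the player is facing
    double waypointX, waypointY;  // current waypoint, if any

    void writeTo(DataOutputStream out) throws IOException {
        out.writeDouble(playerX);
        out.writeDouble(playerY);
        out.writeDouble(headingDegrees);
        out.writeDouble(waypointX);
        out.writeDouble(waypointY);   // ~40 bytes per frame instead of a whole JPEG
    }
}
```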
All in all, the development side of things for this project is pretty mundane. The most exciting part about this has been opening the conversation up to using Glass or even other wearables as a second screen. It’s a use case that hasn’t been talked about much. It could be something as simple as moving your HUD or map to a wearable device, as is demonstrated with this software. Or it could be similar to how the Wii U tries to bring forward asymmetric multiplayer gaming.
How long does the Glass battery last, doing something like this?
Not very long at all, less than an hour.
One of the biggest battery killers on Glass is keeping the screen on. Right now, this software keeps the screen on all the time. There are potential ways to address this, like letting the screen go off after a certain period of time, but there’s a usability problem with that under the current SDK. Whenever the Glass screen goes off, you get kicked out of any app that you were running. This means you would have to relaunch this piece of software on Glass every time the screen turned off, which is a terrible experience.
There are a few “hacks” that you could do to get around that, but it’s not likely something that would continue to work in the long run, and I do believe that when the GDK is released, we will have a real solution for this. With that in mind, it’s something I decided not to address at this point.
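For reference, with the plain Android SDK, holding the display awake typically comes down to a single window flag, which is exactly the kind of always-on behavior behind the battery drain Mike describes. A minimal illustration (not Mike’s code), assuming a plain Activity:

```java
// Standard Android way to hold the screen on while an Activity is in the
// foreground; on Glass, this is a major battery cost.
import android.app.Activity;
import android.os.Bundle;
import android.view.WindowManager;

public class KeepAwakeActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
    }
}
```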
How long did it take to build?
It only took a few short hours to put it together. I worked on it during my train commute to work over the course of a few days. Of course, that’s just a very fragile proof of concept that I can reliably run with my computer, my phone, and my Glass, with my phone acting as a router.
At this point, for anyone else it would probably fall apart. I have a list of things to clean up to get it working in a more general, less controlled environment so others could experience it. I definitely have plans to release it similarly to my other apps. I may hold off until we get the real GDK, in the hopes that the user experience and battery issues can be addressed.