The Dealertrack team in Dallas was thrilled to host a live stream of the Google I/O keynote sessions last month, open to the public right inside our offices.
The two-day event, which aimed to bring together local developers, software engineers, entrepreneurs and leading technology professionals under the same roof for a live simulcast, drinks and some food-truck meat pies (thanks Great Australian Meat Pie Company!), was attended by 39 Dealertrack employees and an additional 18 visitors to Galleria North.
Those 57 folks joined an international community of over 2 million via live stream, as Google cast the program online to 460 locations, in 90 countries spread across 6 continents.
“Thank you for making it in,” said Sundar Pichai, Google’s Senior Vice President of Products, opening his keynote speech. He greeted viewers, who were shown on the big screens behind him, in Mexico City, Mexico, Munich, Germany, and a small town outside of Nairobi, Kenya.
“It’s really exciting to be here today,” said Pichai, as he opened with a discussion about mobile platforms.
“We’ve been talking about the mobile revolution for a while. But just since last year’s I/O, over 600 million people, for the first time, have adopted a smartphone and are beginning the journey of computing. So it’s incredibly special, the moment that we live in.”
Members of several Dallas Engineering teams recapped some of Google’s big reveals at the May 28-29 event, and here they weigh in on their own favorite findings.
As Google I/O’s tagline in the opening presentation said: “Here’s to what you build next.”
Shannon Barrett, Software Engineer, on the Keynote address
Google’s keynote was a pretty solid run-through of their latest vision and how that is trickling down to their technical offerings. It is extremely clear that they are pushing to expand horizontally to reach as many people as possible.
From a technical standpoint, the highlights for me involved how they’re using machine learning and deep neural nets to make their products more useful.
This was demonstrated with their rollout of “Now on Tap,” which adds additional context awareness to your phone. If you ask it about a particular musician, it will infer who you are talking about based on the song you are listening to.
Now on Tap also factors in location and contact awareness, enabling things like automatic appointment management from text messages.
They were also able to demonstrate how their machine learning and deep neural nets are being used with their driverless cars.
Overall, very interesting work they’re doing.
Aaron Seitz, Quality Assurance Engineer, on Google’s Advanced Technologies Group (ATAP)
- Embedded a micro-sized radar into a chip to enable hands-free, full-range-of-motion device interaction and control (Project Soli)
- Designed a brand-new type of conductive thread that’s far stronger, softer, and much more conductive; redesigned how conductive textiles connect to hardware, and then integrated it into existing fabric-maker and clothing-designer supply and production chains (Project Jacquard)
- Expanded their 3D storytelling and moviemaking suite into live-action with dynamically generated surround sound (Google Spotlight Stories)
- Built a security-focused computer into an SD card to enable high-confidence plug-and-play security on any platform: desktop, mobile, Windows, Apple, Linux (Vault)
- Showed a live demo of their customizable smartphone, literally assembling, booting, and modifying it live on stage and demoing the software’s ability to pick up the changes and utilize new hardware (Project Ara)
… And more things, which I’m trying to remember but my brain is still reeling a bit. Just wow.
Gary Bland, Software Engineer, on “Polymer and modern web APIs: In production at Google scale”
Basic summary: Google has made Polymer better and faster, culminating in a production-ready Polymer 1.0. (See the presentation here.)
What it is: Polymer is a library for building web components that also provides a shim for browsers that lack support.
My take: I’m still not sold that this version of Polymer will be feasible on older browsers, particularly on mobile platforms. Tied to that concern is the lack of backwards compatibility of ES6, with its crazy new operators and whole new design patterns that just don’t work natively in those “legacy” browsers. When I say “legacy” here, I mean most modern browsers (see the ES6 compatibility table).
All that being said, if we could really get web components working, it would revolutionize web development — allowing for much better software engineering practices to be brought to bear. Namely, encapsulation and modularity leading to maintainability, flexibility, and extensibility. All those fun -ities and -tions.
Right now, I can see using Polymer in a strict subset of web sites/apps for production. Namely limited-access web apps, like tools for a CMS, notification systems, reporting, and probably webviews in Android apps.
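To give a flavor of what a Polymer 1.0 web component looks like, here’s a minimal sketch of a custom element. The element name `user-card` and its `name` property are purely illustrative, not from the presentation:

```html
<!-- Define a reusable <user-card> element (hypothetical example) -->
<dom-module id="user-card">
  <template>
    <!-- Styles here are scoped to this element only: that's the encapsulation win -->
    <style>
      p { font-weight: bold; }
    </style>
    <p>Hello, <span>{{name}}</span>!</p>
  </template>
  <script>
    // Polymer 1.0 registers the element and wires up its declared properties
    Polymer({
      is: 'user-card',
      properties: {
        name: { type: String, value: 'world' }
      }
    });
  </script>
</dom-module>
```

Once registered, the element is used like any other HTML tag — `<user-card name="Gary"></user-card>` — which is exactly the kind of modularity the talk was selling. On browsers without native web component support, Polymer’s shim (the webcomponents.js polyfill) has to paper over the gaps, which is where my performance concerns come in.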
P.S. Play around with the presentation on various different browsers and see how it performs. This web app was built out with the latest Polymer.