The Big Android BBQ (BABBQ) is almost here, and Google Developers will be there serving up a healthy portion of best practices for Android development and performance! BABBQ will be held at the Hurst Convention Center in Dallas/Ft. Worth, Texas, on October 22-23, 2015.
We also have some great news! If you sign up for the event through August 25th, you will get 25% off when you use the promotional code "ANDROIDDEV25". You can also click here to use the discount.
Now, sit back, and enjoy this video of some Android cowfolk preparing for this year’s BBQ!
The Big Android BBQ is an Android combo meal with a healthy serving of everything ranging from the basics, to advanced technical dives, and best practices for developers, smothered in a sweet sauce of a close-knit community.
This year, we are packing in an unhealthy amount of Android Performance Patterns, followed up with the latest and greatest techniques and APIs from the Android 6.0 Marshmallow release. It’s all rounded out with code labs to let you get hands-on learning. To super-size your meal, Android Developer instructors from Udacity will be on-site to guide users through the Android Nanodegree. (Kinda like a personal waiter at an all-you-can-learn buffet).
Also, come watch Colt McAnlis defend his BABBQ “Speechless” Crown against Silicon Valley's reigning champ Chet Haase. It'll be a fist fight of humor in the heart of Texas!
You can get your tickets here, and we look forward to seeing you in October!
The Android Wear team is rolling out a new update that includes support for interactive watch faces. Now, you can detect taps on the watch face to provide information quickly, without having to open an app. This gives you new opportunities to make your watch face more engaging and interesting. For example, in this animation for the Pujie Black watch face, you can see that just touching the calendar indicator quickly changes the watch face to show the agenda for the day, making the watch face more helpful and engaging.
Interactive watch face API
The first step in building an interactive watch face is to update your build.gradle to use version 1.3.0 of the Wearable Support library. Then, you enable interactive watch faces in your watch face style using setAcceptsTapEvents(true):
setWatchFaceStyle(new WatchFaceStyle.Builder(mService)
        .setAcceptsTapEvents(true)
        // other style customizations
        .build());
To receive taps, you can override the following method:
@Override
public void onTapCommand(int tapType, int x, int y, long eventTime) { }
You will receive a TAP_TYPE_TOUCH event when the user initially taps on the screen, a TAP_TYPE_TAP event when the user releases their finger, and a TAP_TYPE_TOUCH_CANCEL event if the user moves their finger while touching the screen. The events contain the (x, y) coordinates of where the touch occurred. Note that other interactions, such as swipes and long presses, are reserved for use by the Android Wear system user interface.
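Putting this together, a minimal sketch of a tap handler inside a watch face Engine might look like the following (the mCalendarBounds rectangle, the mShowAgenda flag, and the agenda-toggle behavior are hypothetical, not part of the API):

@Override
public void onTapCommand(int tapType, int x, int y, long eventTime) {
    switch (tapType) {
        case WatchFaceService.TAP_TYPE_TAP:
            // The user lifted their finger: if the tap landed inside the
            // (hypothetical) calendar indicator, toggle the agenda overlay.
            if (mCalendarBounds.contains(x, y)) {
                mShowAgenda = !mShowAgenda;
                invalidate(); // request a redraw of the watch face
            }
            break;
        case WatchFaceService.TAP_TYPE_TOUCH:
        case WatchFaceService.TAP_TYPE_TOUCH_CANCEL:
        default:
            // Touch started or was cancelled; nothing to do in this sketch.
            break;
    }
}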
And that’s it! Adding interaction to your existing watch faces is really easy with just a few extra lines of code. We have updated the WatchFace sample to show a complete implementation, and design and development documentation describing the API in detail.
Wi-Fi added to LG G Watch R
This release also brings Wi-Fi support to the LG G Watch R. Wi-Fi support is already available in many Android Wear watches and allows the watch to communicate with the companion phone without requiring a direct Bluetooth connection. So, you can leave your phone at home, and as long as you have Wi-Fi, you can use your watch to receive notifications, send messages, make notes, or ask Google a question. As a developer, you should ensure that you use the Data API to abstract away your communications, so that your application will work on any kind of Android Wear watch, even those without Wi-Fi.
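As a rough sketch, and assuming you already have a connected GoogleApiClient with the Wearable API added (the "/notes" path and keys here are made up for illustration), putting a data item through the Data API looks like this:

PutDataMapRequest dataMapRequest = PutDataMapRequest.create("/notes");
dataMapRequest.getDataMap().putString("note", "Pick up milk");
dataMapRequest.getDataMap().putLong("timestamp", System.currentTimeMillis());
PutDataRequest request = dataMapRequest.asPutDataRequest();
// The system syncs the data item whenever a route to the other device is
// available -- Bluetooth, Wi-Fi, or the cloud -- so the same code works on
// watches with or without Wi-Fi.
Wearable.DataApi.putDataItem(mGoogleApiClient, request);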
Updates to existing watches
This update to Android Wear will roll out via an over-the-air (OTA) update to all Android Wear watches over the coming weeks. The wearable support library version 1.3 provides the implementation for touch interactions, and is designed to continue working on devices which have not been updated. However, the touch support will only work on updated devices, so you should wait to update your apps on Google Play until the OTA rollout is complete, which we’ll announce on the Android Wear Developers Google+ community. If you want to release immediately but check if touch interactions are available, you can use this code snippet:
try {
    PackageInfo packageInfo = getPackageManager()
            .getPackageInfo("com.google.android.wearable.app", 0);
    if (packageInfo.versionCode > 720000000) {
        // Supports taps - cache this result to avoid calling PackageManager again
    } else {
        // Device does not support taps yet
    }
} catch (PackageManager.NameNotFoundException e) {
    // The Android Wear companion app is not installed
}
Android Wear developers have created thousands of amazing apps for the platform and we can’t wait to see the interactive watch faces you build. If you’re looking for a little inspiration, or just a cool new watch face, check out the Interactive Watch Faces collection on Google Play.
Whether you like them straight out of the bag, roasted to a golden brown exterior with a molten center, or in fluff form, who doesn’t like marshmallows? We definitely like them! Since the launch of the M Developer Preview at Google I/O in May, we’ve enjoyed all of your participation and feedback. Today with the final Developer Preview update, we're introducing the official Android 6.0 SDK and opening Google Play for publishing your apps that target the new API level 23 in Android Marshmallow.
Get your apps ready for Android Marshmallow
The final Android 6.0 SDK is now available to download via the SDK Manager in Android Studio. With the Android 6.0 SDK you have access to the final Android APIs and the latest build tools so that you can target API 23. Once you have downloaded the Android 6.0 SDK into Android Studio, update your app project's compileSdkVersion to 23 and you are ready to test your app with the new platform. You can also update your app's targetSdkVersion to 23 to test out API 23-specific features like auto-backup and app permissions.
Along with the Android 6.0 SDK, we also updated the Android Support Library to v23. The new Android Support library makes it easier to integrate many of the new platform APIs, such as permissions and fingerprint support, in a backwards-compatible manner. This release contains a number of new support libraries including: customtabs, percent, recommendation, preference-v7, preference-v14, and preference-leanback-v17.
Check your App Permissions
Along with new platform features like fingerprint support and the Doze power-saving mode, Android Marshmallow introduces a new permissions model that streamlines the app install and update process: users grant permissions at runtime and can revoke them at any time. To support this flexibility, and to make sure your app behaves as expected when a user disables a specific permission, it's important that you update your app to target API 23 and test it thoroughly on Android Marshmallow.
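For example, here is a minimal sketch of checking and requesting a single runtime permission from an Activity using the v23 support library (the request code and the loadContacts() helper are placeholders for your own code):

private static final int REQUEST_READ_CONTACTS = 42; // arbitrary request code

private void readContactsIfAllowed() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.READ_CONTACTS)
            == PackageManager.PERMISSION_GRANTED) {
        loadContacts(); // permission already granted
    } else {
        // Ask the user; the result is delivered to onRequestPermissionsResult().
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.READ_CONTACTS},
                REQUEST_READ_CONTACTS);
    }
}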
How to Get the Update
The Android emulator system images and developer preview system images have been updated for supported Nexus devices (Nexus 5, Nexus 6, Nexus 9 & Nexus Player) to help with your testing. You can download the device system images from the developer preview site. Also, similar to the previous developer update, supported Nexus devices will receive an Over-the-Air (OTA) update over the next couple days.
Although the Android 6.0 SDK is final, the device system images are still developer preview versions. The preview images are near final, but they are not intended for consumer use. Remember that when Android 6.0 Marshmallow launches to the public later this fall, you'll need to manually re-flash your device to a factory image to continue to receive consumer OTA updates for your Nexus device.
What is New
Compared to the previous developer preview update, you will find this final API update fairly incremental. You can check out all the API differences here, but a few of the changes since the last developer update include:
Android Platform Change:
Final Permissions User Interface — we updated the permissions user interface and enhanced some of the permissions behavior.
API Change:
Updates to the Fingerprint API — enabling better error reporting, a better fingerprint enrollment experience, plus enumeration support for greater reliability.
Upload your Android Marshmallow apps to Google Play
Google Play is now ready to accept your API 23 apps via the Google Play Developer Console on all release channels (Alpha, Beta & Production). At the consumer launch this fall, the Google Play store will also be updated so that the app install and update process supports the new permissions model for apps using API 23.
To make sure that your updated app runs well on Android Marshmallow and older versions, we recommend that you use Google Play’s newly improved beta testing feature to get early feedback, then do a staged rollout as you release the new version to all users.
With the release of Google Play services 7.8, we’re excited to announce that we’ve added new Mobile Vision APIs, which include the Barcode Scanner API to read and decode a myriad of different barcode types quickly, easily, and locally.
Barcode detection
Classes for detecting and parsing barcodes are available in the com.google.android.gms.vision.barcode namespace. The BarcodeDetector class is the main workhorse -- processing Frame objects to return a SparseArray<Barcode> of results.
The Barcode type represents a single recognized barcode and its value. In the case of 1D barcodes such as UPC codes, this will simply be the number that is encoded in the barcode. This is available in the rawValue property, with the detected encoding type set in the format field.
For 2D barcodes that contain structured data, such as QR codes, the valueFormat field is set to the detected value type, and the corresponding data field is set. So, for example, if the URL type is detected, the constant URL will be loaded into the valueFormat, and the URL property will contain the desired value. Beyond URLs, there are lots of different data types that the QR code can support -- check them out in the documentation here.
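As a rough sketch, and assuming you already have a Bitmap to scan, detecting a barcode in a single image might look like this:

BarcodeDetector detector = new BarcodeDetector.Builder(context)
        .setBarcodeFormats(Barcode.QR_CODE | Barcode.EAN_13)
        .build();

Frame frame = new Frame.Builder().setBitmap(bitmap).build();
SparseArray<Barcode> barcodes = detector.detect(frame);

if (barcodes.size() > 0) {
    Barcode barcode = barcodes.valueAt(0);
    // For a URL QR code, valueFormat is Barcode.URL and barcode.url.url holds the link.
    Log.d(TAG, "Raw value: " + barcode.rawValue);
}
detector.release();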
When using the API, you can read barcodes in any orientation. They don’t always need to be straight on, and oriented upwards!
Importantly, all barcode parsing is done locally, making it really fast, and in some cases, such as PDF-417, all the information you need might be contained within the barcode itself, so you don’t need any further lookups.
You can learn more about using the API by checking out the sample on GitHub. This uses the Mobile Vision APIs along with a Camera preview to detect both faces and barcodes in the same image.
Supported Barcode Types
The API supports both 1D and 2D barcodes, in a number of sub-formats.
It’s easy to build applications that use bar code detection using the Barcode Scanner API, and we’ve provided lots of great resources that will allow you to do so. Check them out here:
With the release of Google Play services 7.8, we announced the addition of new Mobile Vision APIs, which include a new Face API that finds human faces in images and video better and faster than before. This API is also smarter at distinguishing faces at different orientations and with different facial features and expressions.
Face Detection
Face Detection is a leap forward from the previous Android FaceDetector.Face API. It’s designed to better detect human faces in images and video for easier editing. It’s smart enough to detect faces even at different orientations -- so if your subject’s head is turned sideways, it can detect it. Specific landmarks can also be detected on faces, such as the eyes, the nose, and the edges of the lips.
Important Note
This is not a face recognition API. Instead, the new API simply detects areas in the image or video that are human faces. It also infers from changes in position from frame to frame that faces in consecutive frames of video are the same face. If a face leaves the field of view and re-enters, it isn’t recognized as a previously detected face.
Detecting a face
When the API detects a human face, it is returned as a Face object. The Face object provides the spatial data for the face so you can, for example, draw bounding rectangles around a face, or, if you use landmarks on the face, you can add features to the face in the correct place, such as giving a person a new hat.
getPosition() - Returns the top left coordinates of the area where a face was detected
getWidth() - Returns the width of the area where a face was detected
getHeight() - Returns the height of the area where a face was detected
getId() - Returns an ID that the system associated with a detected face
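Putting these accessors together, here is a rough sketch of detecting faces in a Bitmap and computing a bounding rectangle for each one (the drawing itself is left out):

FaceDetector detector = new FaceDetector.Builder(context)
        .setTrackingEnabled(false) // single images rather than video
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .build();

Frame frame = new Frame.Builder().setBitmap(bitmap).build();
SparseArray<Face> faces = detector.detect(frame);

for (int i = 0; i < faces.size(); i++) {
    Face face = faces.valueAt(i);
    PointF topLeft = face.getPosition();
    RectF bounds = new RectF(topLeft.x, topLeft.y,
            topLeft.x + face.getWidth(), topLeft.y + face.getHeight());
    // bounds can now be drawn over the detected face, or used to anchor overlays.
}
detector.release();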
Orientation
The Face API is smart enough to detect faces in multiple orientations. As the head is a solid object that is capable of moving and rotating around multiple axes, the view of a face in an image can vary wildly.
Here’s an example of a human face, instantly recognizable to a human, despite being oriented in greatly different ways:
The API is capable of detecting this as a face, even in circumstances where as much as half of the facial data is missing and the face is oriented at an angle, such as in the corners of the above image.
Here are the method calls available to a face object:
getEulerY() - Returns the rotation of the face around the vertical axis -- i.e. has the neck turned so that the face is looking left or right [The y degree in the above image]
getEulerZ() - Returns the rotation of the face around the Z axis -- i.e. has the user tilted their neck to cock the head sideways [The r degree in the above image]
Landmarks
A landmark is a point of interest within a face. The API provides a getLandmarks() method which returns a List of Landmark objects; each Landmark gives the coordinates of one of the following points of interest: bottom of mouth, left cheek, left ear, left ear tip, left eye, left mouth, base of nose, right cheek, right ear, right ear tip, right eye, or right mouth.
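For example, a short sketch of looking up the base of the nose on a detected Face (the overlay drawing is left as a placeholder):

for (Landmark landmark : face.getLandmarks()) {
    if (landmark.getType() == Landmark.NOSE_BASE) {
        PointF noseBase = landmark.getPosition();
        // Anchor an overlay -- a hat, glasses, etc. -- relative to noseBase here.
    }
}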
Activity
In addition to detecting the landmark, the API offers the following function calls to allow you to smartly detect various facial states:
getIsLeftEyeOpenProbability() - Returns a value between 0 and 1, giving the probability that the left eye is open
getIsRightEyeOpenProbability() - Same, but for the right eye
getIsSmilingProbability() - Returns a value between 0 and 1, giving the probability that the face is smiling
Thus, for example, you could write an app that only takes a photo when all of the subjects in the image are smiling.
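A minimal sketch of that check might look like the following, assuming the detector was built with classifications enabled (the 0.8 threshold is an arbitrary choice for illustration):

// Returns true only when every detected face is smiling with high confidence.
private boolean everyoneIsSmiling(SparseArray<Face> faces) {
    if (faces.size() == 0) {
        return false;
    }
    for (int i = 0; i < faces.size(); i++) {
        // getIsSmilingProbability() returns a negative value if it could not be computed.
        if (faces.valueAt(i).getIsSmilingProbability() < 0.8f) {
            return false;
        }
    }
    return true;
}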
Learn More
It’s easy to build applications that use facial detection using the Face API, and we’ve provided lots of great resources that will allow you to do so. Check them out here:
Posted by Magnus Hyttsten, Developer Advocate, Play services team
Today we’ve finished the roll-out of Google Play services 7.8. In this release, we’ve added two new APIs. The Nearby Messages API allows you to build simple interactions between nearby devices and people, while the Mobile Vision API helps you create apps that make sense of the visual world, using real-time on-device vision technology. We’ve also added optimization and new features to existing APIs. Check out the highlights in the video or read about them below.
Nearby Messages
Nearby Messages introduces a cross-platform API to find and communicate with mobile devices and beacons, based on proximity. Nearby uses a combination of Bluetooth, Wi-Fi, and an ultrasonic audio modem to connect devices. And it works across Android and iOS. For more info on Nearby Messages, check out the documentation and the launch blog post.
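As a rough sketch, and assuming a connected GoogleApiClient built with Nearby.MESSAGES_API (and that the user has granted Nearby permission), publishing and subscribing looks roughly like this:

Message helloMessage = new Message("Hello, Nearby!".getBytes());

// Broadcast the message so that nearby subscribed devices can find it.
Nearby.Messages.publish(mGoogleApiClient, helloMessage);

// Listen for messages published by nearby devices.
Nearby.Messages.subscribe(mGoogleApiClient, new MessageListener() {
    @Override
    public void onFound(Message message) {
        Log.d(TAG, "Found message: " + new String(message.getContent()));
    }
});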
Mobile Vision API
We’re happy to announce a new Mobile Vision API. Mobile Vision has two components.
The Face API allows developers to find human faces in images and video. It’s faster, more accurate and provides more information than the Android FaceDetector.Face API. It finds faces in any orientation, allows developers to find landmarks such as the eyes, nose, and mouth, and identifies faces that are smiling and/or have their eyes open. Applications include photography, games, and hands-free user interfaces.
The Barcode API allows apps to recognize barcodes in real-time, on device, in any orientation. It supports a range of barcodes and can detect multiple barcodes at once. For more information, check out the Mobile Vision documentation.
Google Cloud Messaging
And finally, Google Cloud Messaging - Google’s simple and reliable messaging service - has expanded notification support to include localization for Android. When composing the notification from the server, set the appropriate body_loc_key, body_loc_args, title_loc_key, and title_loc_args. GCM will handle displaying the notification based on the current device locale, which saves you having to figure out which messages to display on which devices! Check out the docs for more info.
And to get ready for the Android M release, we've added high and normal priority to GCM messaging, giving you additional control over message delivery through GCM. Set messages that need immediate user attention to high priority (e.g., a chat message alert or an incoming voice call alert), and keep the remaining messages at normal priority so that they can be handled in the most battery-efficient way without impeding your app's performance.
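For example, a server-side sketch of composing such a downstream message as JSON (using org.json here; the string resource names and the registration token are placeholders):

// POST the resulting JSON to the GCM connection server with your API key.
JSONObject buildLocalizedNotification(String registrationToken) throws JSONException {
    return new JSONObject()
            .put("to", registrationToken)    // the target device's registration token
            .put("priority", "high")         // needs immediate attention, e.g. a chat alert
            .put("notification", new JSONObject()
                    .put("title_loc_key", "friend_request_title") // string resources resolved
                    .put("body_loc_key", "friend_request_body")   // on the device, in its locale
                    .put("body_loc_args", new JSONArray().put("Alice")));
}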
SDK Now Available!
You can get started developing today by downloading the Google Play services SDK from the Android SDK Manager.
To learn more about Google Play services and the APIs available to you through it, visit our documentation on Google Developers.
South Korean Games developers Zabob Studio and Buff Studio are start-ups seeking to become major players in the global mobile games industry.
Zabob Studio was set up by Kwon Dae-hyeon and his wife in 2013. This couple-run business has already published ten games, including the hits ‘Zombie Judgement Day’ and ‘Infinity Dungeon.’ So far, the company has generated more than ₩140M KRW (approximately $125,000 USD) in sales revenue, with about 60 percent of the studio’s downloads coming from international markets, such as Taiwan and Brazil.
Elsewhere, Buff Studio was founded in 2014 and, right from the start, its first game Buff Knight was an instant hit. It was even featured as the ‘Game of the Week’ on Google Play and was included in “30 Best Games of 2014” lists. A sequel is already in the works, showing the potential of the franchise.
In this video, Kwon Dae-hyeon, CEO of Zabob Studio, and Kim Do-Hyeong, CEO of Buff Studio, talk about how Google Play services and the Google Play Developer Console have helped them maintain a competitive edge, market their games efficiently to global users, and grow revenue on the platform.
Android Developer Story: Buff Studio - Reaching global users with Google Play
Android Developer Story: Zabob Studio - Growing revenue with Google Play
We’re pleased to share that Android Developer Stories will now come with translated subtitles on YouTube in popular languages around the world. Find out how to turn on YouTube captions. To read locally translated blog posts, visit the Google developer blog in Korean.
Previewed earlier this summer at Google I/O, Android Studio 1.3 is now available on the stable release channel. We appreciate the early feedback from developers on our canary and beta channels, which helped us ship a great product.
Android Studio 1.3 is our biggest feature release for the year so far, which includes a new memory profiler, improved testing support, and full editing and debugging support for C++. Let’s take a closer look.
New Features in Android Studio 1.3
Performance & Testing Tools
Android Memory (HPROF) Viewer
Android Studio now allows you to capture and analyze memory snapshots in the native Android HPROF format.
Allocation Tracker
In addition to displaying a table of memory allocations that your app uses, the updated allocation tracker now includes a visual way to view your app's allocations.
APK Tests in Modules
For more flexibility in app testing, you now have the option to place your code tests in a separate module and use the new test plugin (‘com.android.test’) instead of keeping your tests right next to your app code. This feature does require your app project to use the Android Gradle plugin 1.3.
Code and SDK Management
App permission annotations
Android Studio now has inline code annotation support to help you manage the new app permissions model in the M release of Android. Learn more about code annotations.
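For instance, annotating a method with @RequiresPermission from the support-annotations library lets lint warn callers that have not checked or requested the permission (the helper method here is hypothetical):

@RequiresPermission(Manifest.permission.ACCESS_FINE_LOCATION)
public Location getLastKnownFineLocation(LocationManager locationManager) {
    // Lint flags callers of this method that may not hold ACCESS_FINE_LOCATION.
    return locationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER);
}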
Data Binding Support
New data binding features allow you to create declarative layouts that minimize boilerplate code by binding your application logic to your layouts. Learn more about data binding.
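As a small sketch of the idea, assuming a layout file main_activity.xml that declares a user variable and a simple User class of your own, the generated binding class can be used like this:

// MainActivityBinding is generated from res/layout/main_activity.xml (hypothetical layout).
MainActivityBinding binding = DataBindingUtil.setContentView(this, R.layout.main_activity);
binding.setUser(new User("Jane", "Doe")); // values flow into the layout's binding expressions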
SDK Auto Update & SDK Manager
Managing Android SDK updates is now a part of Android Studio. By default, Android Studio will now prompt you about new SDK and tool updates. You can still adjust your preferences in the new, integrated Android SDK Manager.
C++ Support
As a part of the Android Studio 1.3 stable release, we included an Early Access Preview of the C++ editor and debugger support, paired with an experimental build plugin. See the Android C++ Preview page for information on how to get started. Support for more complex projects and build configurations is in development, but let us know your feedback.
Time to Update
An important thing to remember is that an update to Android Studio does not require you to change your Android app projects. With updating, you get the latest features but still have control of which build tools and app dependency versions you want to use for your Android app.
For current developers on Android Studio, you can check for updates from the navigation menu. For new users, you can learn more about Android Studio on the product overview page or download the stable version from the Android Studio download site.
We are excited to launch this set of features in Android Studio, and we are hard at work developing the next set of tools to make Android development easier in Android Studio. As always, we welcome feedback on how we can help you. Connect with the Android developer tools team on Google+.
Posted by Chandu Thota, Engineering Director and Matthew Kulick, Product Manager
Just like lighthouses have helped sailors navigate the world for thousands of years, electronic beacons can be used to provide precise location and contextual cues within apps to help you navigate the world. For instance, a beacon can label a bus stop so your phone knows to have your ticket ready, or a museum app can provide background on the exhibit you’re standing in front of. Today, we’re beginning to roll out a new set of features to help developers build apps using this technology. This includes a new open format for Bluetooth low energy (BLE) beacons to communicate with people’s devices, a way for you to add this meaningful data to your apps and to Google services, as well as a way to manage your fleet of beacons efficiently.
Eddystone: an open BLE beacon format
Working closely with partners in the BLE beacon industry, we’ve learned a lot about the needs and the limitations of existing beacon technology. So we set out to build a new class of beacons that addresses real-life use-cases, cross-platform support, and security.
At the core of what it means to be a BLE beacon is the frame format—i.e., a language—that a beacon sends out into the world. Today, we’re expanding the range of use cases for beacon technology by publishing a new and open format for BLE beacons that anyone can use: Eddystone. Eddystone is robust and extensible: It supports multiple frame types for different use cases, and it supports versioning to make introducing new functionality easier. It’s cross-platform, capable of supporting Android, iOS or any platform that supports BLE beacons. And it’s available on GitHub under the open-source Apache v2.0 license, for everyone to use and help improve.
By design, a beacon is meant to be discoverable by any nearby Bluetooth Smart device, via its identifier which is a public signal. At the same time, privacy and security are really important, so we built in a feature called Ephemeral Identifiers (EIDs) which change frequently, and allow only authorized clients to decode them. EIDs will enable you to securely do things like find your luggage once you get off the plane or find your lost keys. We’ll publish the technical specs of this design soon.
Eddystone for developers: Better context for your apps
Eddystone offers two key developer benefits: better semantic context and precise location. To support these, we’re launching two new APIs. The Nearby API for Android and iOS makes it easier for apps to find and communicate with nearby devices and beacons, such as a specific bus stop or a particular art exhibit in a museum, providing better context. And the Proximity Beacon API lets developers associate semantic location (i.e., a place associated with a lat/long) and related data with beacons, stored in the cloud. This API will also be used in existing location APIs, such as the next version of the Places API.
Eddystone for beacon manufacturers: Single hardware for multiple platforms
Eddystone’s extensible frame formats allow hardware manufacturers to support multiple mobile platforms and application scenarios with a single piece of hardware. An existing BLE beacon can be made Eddystone-compliant with a simple firmware update. Because we built Eddystone as an open, extensible, and interoperable protocol, we’ll also introduce an Eddystone certification process in the near future, working closely with hardware manufacturing partners. We already have a number of partners that have built Eddystone-compliant beacons.
Eddystone for businesses: Secure and manage your beacon fleet with ease
As businesses move from validating their beacon-assisted apps to deploying beacons at scale in places like stadiums and transit stations, hardware installation and maintenance can be challenging: which beacons are working, broken, missing or displaced? So starting today, beacons that implement Eddystone’s telemetry frame (Eddystone-TLM) in combination with the Proximity Beacon API’s diagnostic endpoint can help deployers monitor their beacons’ battery health and displacement—common logistical challenges with low-cost beacon hardware.
Eddystone for Google products: New, improved user experiences
We’re also starting to improve Google’s own products and services with beacons. Google Maps launched beacon-based transit notifications in Portland earlier this year, to help people get faster access to real-time transit schedules for specific stations. And soon, Google Now will also be able to use this contextual information to help prioritize the most relevant cards, like showing you menu items when you’re inside a restaurant.
We want to make beacons useful even when a mobile app is not available; to that end, the Physical Web project will be using Eddystone beacons that broadcast URLs to help people interact with their surroundings.
Beacons are an important way to deliver better experiences for users of your apps, whether you choose to use Eddystone with your own products and services or as part of a broader Google solution like the Places API or Nearby API. The ecosystem of app developers and beacon manufacturers is important in pushing these technologies forward and the best ideas won’t come from just one company, so we encourage you to get some Eddystone-supported beacons today from our partners and begin building!
Earlier this summer at Google I/O, we launched the M Developer Preview. The developer preview is an early access opportunity to test and optimize your apps for the next release of Android. Today we are releasing an update to the M Developer Preview that includes fixes and updates based on your feedback.
What’s New
The Developer Preview 2 update includes the latest M release platform code and near-final APIs for you to validate your app. To provide more testing support, we have refined the Nexus system images and emulator system images with the Android platform updates. In addition to platform updates, the system images also include Google Play services 7.6.
How to Get the Update
If you are already running the M developer preview launched at Google I/O (Build #MPZ44Q) on a supported Nexus device (e.g. Nexus 5, Nexus 6, Nexus 9, or Nexus Player), the update can be delivered to your device via an over-the-air update. We expect all devices currently on the developer preview to receive the update over the next few days. We also posted a new version of the preview system image on the developer preview website. (To view the preview website in a language other than English, select the appropriate language from the language selector at the bottom of the page).
For those developers using the emulator, you can update your M preview system images via the SDK Manager in Android Studio.
What are the Major Changes?
We have addressed many issues brought up during the first phase of the developer preview. Check out the release notes for a detailed list of changes in this update. Some of the highlights to the update include:
Android Platform Changes:
Modifications to platform permissions including external storage, Wi-Fi & Bluetooth location, and changes to contacts/identity permissions. Device connections through the USB port are now set to charge-only mode by default. To access the device, users must explicitly grant permission.
API Changes:
Updated Bluetooth Stylus APIs with new callback events: use View.OnContextClickListener and GestureDetector.OnContextClickListener to listen for stylus button presses and perform secondary actions (see the sketch below).
Updated Media API with the new InputDevice.hasMicrophone() method for determining whether an input device has a built-in microphone.
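For the stylus support mentioned above, a minimal sketch of listening for a stylus button press on a view might look like this:

view.setOnContextClickListener(new View.OnContextClickListener() {
    @Override
    public boolean onContextClick(View v) {
        // Perform the secondary action, e.g. show a context menu for the item.
        return true; // the context click was handled
    }
});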
Fixes for developer-reported issues:
TextInputLayout doesn't set hint for embedded EditText. (fixed issue)
Camera Permission issue with Legacy Apps (fixed issue)
Next Steps
With the final M release still on schedule for this fall, the platform features and APIs are near final. However, there is still time to report critical issues as you continue to test and validate your apps on the M Developer Preview. You can also visit our M Developer Preview community to share ideas and information.
Thanks again for your support. We look forward to seeing your apps that are ready to go for the M release this fall.