Tag Archives: Access

Choosing the right storage experience

Posted by Yacine Rezgui - Developer Relations Engineer

The upcoming stable release of Android 14 is fast approaching. Now is a great time to test your app with this new release’s changes if you haven’t done so already. With Platform Stability, you can even submit apps targeting SDK 34 to the Google Play Store.

Android 14 introduces a new feature called Selected Photos Access, allowing users to grant apps access to specific images and videos in their library, rather than granting access to all media of a given type. This is a great way for users to feel more comfortable sharing media with apps, and it's also a great way for developers to build apps that respect user privacy.

Image in four panels showing Select Photos Access being used to share media from the user's on-device library
To ease the migration for apps that currently use storage permissions, those apps will run in a compatibility mode. In this mode, if a user chooses “Select photos and videos”, the permission will appear to be granted, but the app will only be able to access the selected media. The permission is revoked when your app process is killed or stays in the background for a certain amount of time (similar to one-time permissions). When your app requests the permission again, users can select a different set of pictures or videos if they wish. Rather than letting the system manage this re-selection, we recommend that apps handle the process themselves to provide a better user experience.

Image in four panels showing media reselection being used to update user's choice of which media to be shared

Choosing the right storage experience

Even when your app correctly manages media re-selection, we believe that for the vast majority of apps, the permissionless photo picker that we introduced last year will be the best media selection solution for both user experience and privacy. Most apps let users choose media for tasks such as attaching a file to an email, changing a profile picture, or sharing with friends. The Android photo picker's familiar UI gives users a consistent, high-quality experience that helps them grant access with confidence, letting you focus on the differentiating features of your app. If you absolutely need a more tightly integrated solution, integrating with MediaStore can be considered as an alternative to the photo picker.


Android photo picker

Image of My Profile page on a mobile device

To use the photo picker in your app, you only need to register an activity result:

// Using Jetpack Compose, you should use rememberLauncherForActivityResult
// instead of registerForActivityResult

// Registers a photo picker activity launcher in single-select mode
val pickMedia = registerForActivityResult(PickVisualMedia()) { uri ->
    // Callback is invoked after the user selects a media item or closes the photo picker
    if (uri != null) {
        Log.d("PhotoPicker", "Selected URI: $uri")
    } else {
        Log.d("PhotoPicker", "No media selected")
    }
}

The photo picker lets you restrict the selection to photos, videos, or a specific MIME type when launched:

// Launch the photo picker and let the user choose images and videos.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageAndVideo))

// Launch the photo picker and let the user choose only images.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageOnly))

// Launch the photo picker and let the user choose only videos.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.VideoOnly))

// Launch the photo picker and let the user choose only images/videos of a
// specific MIME type, like GIFs.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.SingleMimeType("image/gif")))

You can set a maximum limit when allowing multiple selections:

// Registers a photo picker activity launcher in multi-select mode.
// In this example, the app lets the user select up to 5 media files.
val pickMultipleMedia = registerForActivityResult(PickMultipleVisualMedia(5)) { uris ->
    // Callback is invoked after the user selects media items or closes the
    // photo picker.
    if (uris.isNotEmpty()) {
        Log.d("PhotoPicker", "Number of items selected: ${uris.size}")
    } else {
        Log.d("PhotoPicker", "No media selected")
    }
}

Lastly, you can enable photo picker support on older devices, from Android KitKat (API 19) onwards, via Google Play services by adding this entry to your AndroidManifest.xml file:

<!-- Prompt Google Play services to install the backported photo picker module -->
<service android:name="com.google.android.gms.metadata.ModuleDependencies"
         android:enabled="false"
         android:exported="false"
         tools:ignore="MissingClass">
    <intent-filter>
        <action android:name="com.google.android.gms.metadata.MODULE_DEPENDENCIES" />
    </intent-filter>
    <meta-data android:name="photopicker_activity:0:required" android:value="" />
</service>

In less than 20 lines of code you have a well-integrated photo/video picker within your app that doesn’t require any permissions!


Creating your own gallery picker

Creating your own gallery picker requires extensive development and maintenance, and the app needs to request storage permissions to get explicit user consent, which users can deny or, as of Android 14, limit to selected media.

First, request the correct storage permissions in the Android manifest depending on the OS version:

<!-- Devices running up to Android 12L -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"
                 android:maxSdkVersion="32" />

<!-- Devices running Android 13+ -->
<uses-permission android:name="android.permission.READ_MEDIA_IMAGES" />
<uses-permission android:name="android.permission.READ_MEDIA_VIDEO" />

<!-- To handle the reselection within the app on Android 14+ (when targeting API 33+) -->
<uses-permission android:name="android.permission.READ_MEDIA_VISUAL_USER_SELECTED" />

Then, the app needs to request the correct runtime permissions, also depending on the OS version:

val requestPermissions = registerForActivityResult(RequestMultiplePermissions()) { results ->
    // Handle permission requests results
    // See the permission example in the Android platform samples:
    // https://github.com/android/platform-samples
}

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE) {
    requestPermissions.launch(
        arrayOf(READ_MEDIA_IMAGES, READ_MEDIA_VIDEO, READ_MEDIA_VISUAL_USER_SELECTED)
    )
} else if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
    requestPermissions.launch(arrayOf(READ_MEDIA_IMAGES, READ_MEDIA_VIDEO))
} else {
    requestPermissions.launch(arrayOf(READ_EXTERNAL_STORAGE))
}
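The version branching above can also be factored into a small pure function that returns the permission set for a given SDK level. This is a sketch with a hypothetical function name (not part of the Android SDK); the permission strings are written out literally so the logic can be exercised off-device:

```kotlin
// Sketch: which storage permissions to request for a given SDK level.
// The function name is hypothetical; permission strings are spelled out
// so this runs without the Android framework.
fun storagePermissionsFor(sdkInt: Int): List<String> = when {
    // Android 14+ (API 34): also request the partial-access permission
    sdkInt >= 34 -> listOf(
        "android.permission.READ_MEDIA_IMAGES",
        "android.permission.READ_MEDIA_VIDEO",
        "android.permission.READ_MEDIA_VISUAL_USER_SELECTED",
    )
    // Android 13 (API 33): granular media permissions
    sdkInt >= 33 -> listOf(
        "android.permission.READ_MEDIA_IMAGES",
        "android.permission.READ_MEDIA_VIDEO",
    )
    // Android 12L and below: the legacy storage permission
    else -> listOf("android.permission.READ_EXTERNAL_STORAGE")
}

fun main() {
    println(storagePermissionsFor(34).size)
    println(storagePermissionsFor(33).size)
    println(storagePermissionsFor(28).first())
}
```

Keeping the branching in one place like this makes it easy to pass the result straight to `requestPermissions.launch(...)` and to keep the logic in sync as new SDK levels appear.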

With the Selected Photos Access feature in Android 14, your app should adopt the new READ_MEDIA_VISUAL_USER_SELECTED permission to control media re-selection, and update your app’s UX to let users grant your app access to a different set of images and videos.

When opening the selection dialog, photos and/or videos will be shown depending on the permissions requested: if you request the READ_MEDIA_VIDEO permission without the READ_MEDIA_IMAGES permission, only videos will appear in the UI for users to select.

// Allowing the user to select only videos
requestPermissions.launch(arrayOf(READ_MEDIA_VIDEO, READ_MEDIA_VISUAL_USER_SELECTED))

You can check if your app has full, partial or denied access to the device’s photo library and update your UX accordingly. It's even more important now to request these permissions when the app needs storage access, instead of at startup. Keep in mind that the permission grant can be changed between the onStart and onResume lifecycle callbacks, as the user can change the access in the settings without closing your app.

if (
    Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU &&
    (
        ContextCompat.checkSelfPermission(context, READ_MEDIA_IMAGES) == PERMISSION_GRANTED ||
        ContextCompat.checkSelfPermission(context, READ_MEDIA_VIDEO) == PERMISSION_GRANTED
    )
) {
    // Full access on Android 13+
} else if (
    Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE &&
    ContextCompat.checkSelfPermission(context, READ_MEDIA_VISUAL_USER_SELECTED) == PERMISSION_GRANTED
) {
    // Partial access on Android 14+
} else if (ContextCompat.checkSelfPermission(context, READ_EXTERNAL_STORAGE) == PERMISSION_GRANTED) {
    // Full access up to Android 12
} else {
    // Access denied
}
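The same decision tree can be expressed as a plain function over the individual grant results, which makes it straightforward to unit test. This is a sketch: the enum and function names are hypothetical, and the grants are passed as booleans so the logic runs without the Android framework:

```kotlin
// Sketch of the access-level decision tree above.
// Names are hypothetical, not part of the Android SDK; grant results are
// plain booleans standing in for ContextCompat.checkSelfPermission calls.
enum class MediaAccess { FULL, PARTIAL, DENIED }

fun classifyMediaAccess(
    sdkInt: Int,
    imagesGranted: Boolean,
    videoGranted: Boolean,
    userSelectedGranted: Boolean,
    legacyReadGranted: Boolean,
): MediaAccess = when {
    // Full access on Android 13+: either granular media permission
    sdkInt >= 33 && (imagesGranted || videoGranted) -> MediaAccess.FULL
    // Partial access on Android 14+: only the user-selected permission
    sdkInt >= 34 && userSelectedGranted -> MediaAccess.PARTIAL
    // Full access up to Android 12: the legacy storage permission
    sdkInt < 33 && legacyReadGranted -> MediaAccess.FULL
    else -> MediaAccess.DENIED
}
```

Centralizing the check like this also makes it easy to re-evaluate the access level in onResume, since the user can change the grant in Settings while your app is backgrounded.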

Once you've verified that you have the right storage permissions, you can interact with MediaStore to query the device library (whether the granted access is partial or full):

data class Media(
    val uri: Uri,
    val name: String,
    val size: Long,
    val mimeType: String,
    val dateTaken: Long
)

// We run our querying logic in a coroutine outside of the main thread to keep the app responsive.
// Keep in mind that this code snippet is querying all the images of the shared storage
suspend fun getImages(contentResolver: ContentResolver): List<Media> = withContext(Dispatchers.IO) {
    val projection = arrayOf(
        Images.Media._ID,
        Images.Media.DISPLAY_NAME,
        Images.Media.SIZE,
        Images.Media.MIME_TYPE,
        Images.Media.DATE_TAKEN,
    )

    val collectionUri = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
        // This allows us to query all the device storage volumes instead of the primary only
        Images.Media.getContentUri(MediaStore.VOLUME_EXTERNAL)
    } else {
        Images.Media.EXTERNAL_CONTENT_URI
    }

    val images = mutableListOf<Media>()

    contentResolver.query(
        collectionUri,
        projection,
        null,
        null,
        "${Images.Media.DATE_ADDED} DESC"
    )?.use { cursor ->
        val idColumn = cursor.getColumnIndexOrThrow(Images.Media._ID)
        val displayNameColumn = cursor.getColumnIndexOrThrow(Images.Media.DISPLAY_NAME)
        val sizeColumn = cursor.getColumnIndexOrThrow(Images.Media.SIZE)
        val mimeTypeColumn = cursor.getColumnIndexOrThrow(Images.Media.MIME_TYPE)
        val dateTakenColumn = cursor.getColumnIndexOrThrow(Images.Media.DATE_TAKEN)

        while (cursor.moveToNext()) {
            val uri = ContentUris.withAppendedId(collectionUri, cursor.getLong(idColumn))
            val name = cursor.getString(displayNameColumn)
            val size = cursor.getLong(sizeColumn)
            val mimeType = cursor.getString(mimeTypeColumn)
            val dateTaken = cursor.getLong(dateTakenColumn)

            images.add(Media(uri, name, size, mimeType, dateTaken))
        }
    }

    return@withContext images
}

The code snippet above is simplified to illustrate how to interact with MediaStore. In a proper production app, you should consider using pagination with something like the Paging library to ensure good performance.

You may not need permissions

As of Android 10 (API 29), you no longer need storage permissions to add files to shared storage. This means that you can add images to the gallery, record videos and save them to shared storage, or download PDF invoices without having to request storage permissions. If your app only adds files to shared storage and does not query images or videos, you should stop requesting storage permissions and set a maxSdkVersion of API 28 in your AndroidManifest.xml:

<!-- No permission is needed to add files from Android 10 -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"
                 android:maxSdkVersion="28" />

ACTION_GET_CONTENT behavior change

In our last storage blog post, we announced that we’ll be rolling out a behavior change whenever the ACTION_GET_CONTENT intent is launched with an image and/or video MIME type. If you haven’t tested this change yet, you can enable it manually on your device:

adb shell device_config put storage_native_boot take_over_get_content true

That covers how to offer visual media selection in your app with the privacy-preserving changes we've made across multiple Android releases. If you have any feedback or suggestions, submit tickets to our issue tracker.

“L10n” – Localisation: Breaking down language barriers to unleash the benefits of the internet for all Indians

In July, at the Google for India event, we outlined our vision to make the Internet helpful for a billion Indians, and power the growth of India’s digital economy. One critical challenge we need to overcome is India’s vast linguistic diversity, with dialects changing every hundred kilometres. More often than not, one language doesn’t seamlessly map to another. A word in Bengali roughly translates to a full sentence in Tamil, and there are expressions in Urdu which have no adequately evocative equivalent in Hindi.


This poses a formidable challenge for technology developers, who rely on commonly understood visual and spoken idioms to make tech products work universally. 


We realised early on that there was no way to simplify this challenge - that there wasn’t any one common minimum that could address the needs of every potential user in this country. If we hoped to bring the potential of the internet within reach of every user in India, we had to invest in building products, content and tools in every popularly spoken Indian language. 


India’s digital transformation will be incomplete if English proficiency continues to be the entry barrier for basic and potent uses of the Internet such as buying and selling online, finding jobs, using net banking and digital payments or getting access to information and registering for government schemes.


The work, though underway, is far from done. We are driving a 3-point strategy to truly digitize India:


  1. Invest in ML & AI efforts at Google’s research center in India, to make advances in machine learning and AI models accessible to everyone across the ecosystem.

  2. Partner with innovative local startups who are building solutions to cater to the needs of Indians in local languages

  3. Drastically improve the experience of Google products and services for Indian language users


And so today, we are happy to announce a range of features to help deliver an even richer language experience to millions across India.

Easily toggling between English and Indian language results

Four years ago we made it easier for people in states with a significant Hindi-speaking population to flip between English and Hindi results for a search query, by introducing a simple ‘chip’ or tab they could tap to see results in their preferred language. In fact, since the launch of this Hindi chip and other language features, we have seen more than a 10X increase in Hindi queries in India.

We are now making it easier to toggle Search results between English and four additional Indian languages: Tamil, Telugu, Bangla and Marathi.

People can now tap a chip to see Search results in their local language

Understanding which language content to surface, when

Typing in an Indian language in its native script is typically more difficult, and can often take three times as long, compared to English. As a result, many people search in English even if they really would prefer to see results in a local language they understand.

Search will show relevant results in more Indian languages

Over the next month, Search will start to show relevant content in supported Indian languages where appropriate, even if the local language query is typed in English. This functionality will also better serve bilingual people who are comfortable reading both English and an Indian language. It will roll out in five Indian languages: Hindi, Bangla, Marathi, Tamil, and Telugu.

Enabling people to use apps in the language of their choice

Just like you use different tools for different tasks, we know (because we do it ourselves) people often select a specific language for a particular situation. Rather than guessing preferences, we launched the ability to easily change the language of Google Assistant and Discover to be different from the phone language. Today in India, more than 50 percent of the content viewed on Google Discover is in Indian languages. A third of Google Assistant users in India are using it in an Indian language, and since the launch of Assistant language picker, queries in Indian languages have doubled.

Maps will now enable people to select from nine Indian languages

We are now extending this ability to Google Maps, where users can quickly and easily change their Maps experience into one of nine Indian languages, by simply opening the app, going to Settings, and tapping ‘App language’. This will allow anyone to search for places, get directions and navigation, and interact with the Map in their preferred local language.

Homework help in Hindi (and English)

Meaning is also communicated with images: and this is where Google Lens can help. From street signs to restaurant menus, shop names to signboards, Google Lens lets you search what you see, get things done faster, and understand the world around you—using just your camera or a photo. In fact, more people use Google Lens in India every month than in any other country worldwide. As an example of its popularity, over 3 billion words have been translated in India with Lens in 2020.

Lens is particularly helpful for students wanting to learn about the world. If you’re a parent, you’ll be familiar with your kids asking you questions about homework. About stuff you never thought you’d need to remember, like... quadratic equations.

Google Lens can now help you solve math problems by simply pointing your camera 

Now, right from the Search bar in the Google app, you can use Lens to snap a photo of a math problem and learn how to solve it on your own, in Hindi (or English). To do this, Lens first turns an image of a homework question into a query. Based on the query, we will show step-by-step guides and videos to help explain the problem.

Helping computer systems understand Indian languages at scale

At Google Research India, we have spent a lot of time helping computer systems understand human language. As you can imagine, this is quite an exciting challenge. The new approach we developed in India is called Multilingual Representations for Indian Languages (or ‘MuRIL’). Among many other benefits of this powerful multilingual model that scales across languages, MuRIL also provides support for transliterated text, such as when writing Hindi using Roman script, which was something missing from previous models of its kind.

One of the many tasks MuRIL is good at is determining the sentiment of a sentence. For example, “Achha hua account bandh nahi hua” would previously be interpreted as having a negative meaning, but MuRIL correctly identifies this as a positive statement. Or take the ability to classify a person versus a place: ‘Shirdi ke sai baba’ would previously be interpreted as a place, which is wrong, but MuRIL correctly interprets it as a person.

MuRIL currently supports 16 Indian languages as well as English -- the highest coverage for Indian languages among publicly available models of its kind.

MuRIL is free and open source, available on TensorFlow Hub: https://tfhub.dev/google/MuRIL/1



We are thrilled to announce that we have made MuRIL open source, and it is currently available to download from the TensorFlow Hub, for free. We hope MuRIL will be the next big evolution for Indian language understanding, forming a better foundation for researchers, students, startups, and anyone else interested in building Indian language technologies, and we can’t wait to see the many ways the ecosystem puts it to use.

We’re sharing this to provide a flavor of the depth of work underway -- and which is required -- to really make a universally potent and accessible Internet a reality. This said, the Internet in India is the sum of the work of millions of developers, content creators, news media and online businesses, and it is only when this effort is undertaken at scale by the entire ecosystem, that we will help fulfil the truly meaningful promise of the billionth Indian coming online.

Posted by the Google India team


Google Nest Device Access Console now available for partners and individuals

Posted by Gabriel Rubinsky, Senior Product Manager

Today, we’re excited to announce the Device Access Console is available.

The Device Access program lets individuals and qualified partners securely access and control Nest products with their apps and solutions.

At the heart of the Device Access program is the Smart Device Management API. Since we announced the program, Alarm.com, Control4, DISH, OhmConnect, NRG Energy, and Vivint Smart Home have successfully completed the Early Access Program (EAP) with Nest thermostat, camera, or doorbell traits. In the coming months, we expect additional devices to be supported and more smart home partners to launch their new integrations as well.

Enhanced privacy and security

The Device Access program is built on a foundation of privacy and security. The program requires partner submission of qualified use cases and completion of a security assessment before being allowed to utilize the Smart Device Management API for commercial use. The program process gives our users the confidence that commercial partners offering integrated Nest solutions have data protections and safeguards in place that meet our privacy and security standards.

Nest device access and control

The Device Access program currently allows qualified partners to integrate directly with Nest devices, enable control of thermostats, access and view camera feeds, and receive doorbell notifications with images. All qualified partner solutions and services will require end-user consent before being able to access, control, and manage Nest devices as part of their service offerings, either through a partner client app or service platform. Ultimately, this gives users more choice in how to control their home and their own generated data.

If you’re a developer or a Nest user interested in the Device Access program or access to the sandbox development environment,* you can find more information on our Device Access site.

  • Device Access for Commercial Developers

    The Device Access program allows trusted partners to offer access, management, and control of Nest devices within the partner’s app, solution, and ecosystem. It allows developers to test all API traits in the sandbox environment, before moving forward with commercial integration. Learn more

  • Device Access for Individuals

    For individual smart home developer enthusiasts, you can register to access the sandbox development environment, allowing you to directly control your own Nest devices through your private integrations and automations. Learn more

We’re doing the work to make Nest devices more secure and protect user privacy long into the future. This means expanding privacy and data security programs, and delivering flexibility for our customers to use thousands of products from partners to create a connected, helpful home.



* Registration consists of the acceptance of the Google API and Nest Device Access Sandbox Terms of Service, along with a one-time, non-refundable nominal fee per account
