Tag Archives: dfp_api

Announcing the DFP API Playground GitHub project



As many of you are well aware, the DFP API Playground is a great tool to explore the DFP API. With it, you can easily test PQL statements and examine the JSON equivalent of the objects you fetch. Best of all, it acts as a full reference implementation of how you should integrate the API with App Engine, touching on features such as OAuth2 authentication, task queues, and channels.

Now, we are announcing the new GitHub project for the DFP API Playground, featuring all the benefits of being hosted on GitHub, improved getting-started instructions, and revamped project downloads.

Taking it for a spin

The first thing you’ll notice when visiting the project is that we’ve made it incredibly easy to get up and running.

With Maven

If you are a Maven user, it’s as simple as modifying the appengine-web.xml file and running:

mvn appengine:devserver

We’ve also included an m2e project so developing with Eclipse is easier than ever. Just import the extracted dfp-playground-maven-project download into Eclipse, modify the appengine-web.xml file (in the src/main/webapp/WEB-INF directory), and run the DevAppServer.launch profile in the eclipse-launch-profiles folder.
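
The edits themselves are the standard App Engine ones. As a rough sketch (assuming nothing about the Playground’s additional settings, which have their own placeholders in the file), the generic elements you typically set look like this:

<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appspot.com/ns/1.0">
  <!-- Replace with your own App Engine application ID. -->
  <application>your-app-id</application>
  <version>1</version>
  <threadsafe>true</threadsafe>
</appengine-web-app>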

With Google Plugin for Eclipse

We also heard you loud and clear that not everyone uses Maven, so we’ve added a Google Plugin for Eclipse project download which includes all jar dependencies. As with the Maven project, just extract the dfp-playground-jars-and-google-eclipse-plugin-project download, import it into Eclipse, add Google App Engine functionality to the project, modify the appengine-web.xml file (in the war/WEB-INF directory), and run the project like any other App Engine project.

If you’d like to learn more, take a look at the README. As always, we are open to any feedback, so please don’t hesitate to leave us any feature requests in the issues section. Also, if you’d like to contribute to the project, we welcome any patches (just make sure you become an official contributor first).

In the coming months, we’ll be adding even more functionality to the application, so stay tuned and happy hacking!

Reminder: DFP API v201302 and earlier will be deprecated on August 1, 2014

After successfully removing several older versions of the DFP API earlier this month, we’re continuing our 'Spring Cleaning' with a reminder that support for versions v201302 and earlier of the DFP API will end on August 1, 2014. If you are still using one of those versions after that date, all requests will fail, and your application will cease to work until you migrate to a supported version. Please reference the release notes for all changes to the API when migrating to a newer version.

In addition, please note that ClientLogin support is being phased out and that you will have to migrate to OAuth 2.0 in order to authenticate starting with v201403. Please reference our OAuth 2.0 implementation guide for help with this process.

If you have any questions about this upcoming change or anything else related to the DFP API, please contact us on the forum or via our Google+ page.

Troubleshooting and error handling – handling exceptions thrown by the DFP API (Part II)

In our previous blog post on exception handling in the DFP API, we went over the first of two major scenarios where adding exception handling can help you troubleshoot issues when they occur. In this blog post, we finish off by discussing how to handle exceptions that occur when retrieving entities.

Retrieving entities

When retrieving entities, you may have run into errors such as ServerError, QuotaError, or client-library-specific errors like RemoteException. Generally, these errors are not caused by user error; they occur because the API server is busy, or because the entities you are retrieving are too large or corrupted. If you’re already following our best practice of paging through your entities with our suggested page size and are still running into these errors from time to time, you can tackle them by writing code to handle the exceptions.

To handle most errors you may receive, we generally recommend reducing the page size, waiting a few seconds, and then retrying the request. If the request continues to fail, repeat this process up to five times. This will give you an idea of whether the failure was a temporary server error or there is a deeper issue. The following example, which augments the GetAllLineItemsExample from the Google Ads API Java Client Library, shows how to do this: it halves the page size on each retry, and starts the wait time between retries at two seconds, doubling it with each retry.
private static final int MAX_RETRY_LIMIT = 5;
private static final long BASE_WAIT_TIME = 2000;

public static void runExample(DfpServices dfpServices, DfpSession session)
    throws Exception {
  int pageLimit = StatementBuilder.SUGGESTED_PAGE_LIMIT;

  // Get the LineItemService.
  LineItemServiceInterface lineItemService =
      dfpServices.get(session, LineItemServiceInterface.class);

  // Create a statement to select all line items.
  StatementBuilder statementBuilder =
      new StatementBuilder().orderBy("id ASC").limit(pageLimit).offset(0);

  // Default for total result set size.
  int totalResultSetSize = 0;
  int retryCount = 0;
  LineItemPage page = null;
  long waitTime = BASE_WAIT_TIME;

  do {
    // Reset these values for each page. Restoring the statement builder's
    // limit matters in case the previous page succeeded with a reduced limit.
    retryCount = 0;
    pageLimit = StatementBuilder.SUGGESTED_PAGE_LIMIT;
    statementBuilder.limit(pageLimit);
    waitTime = BASE_WAIT_TIME;
    page = null;

    do {
      try {
        System.out.printf(
            "Getting line items with page size of %d at offset %d (attempt %d)...\n",
            pageLimit, statementBuilder.getOffset(), retryCount + 1);
        // Get line items by statement.
        page = lineItemService.getLineItemsByStatement(statementBuilder.toStatement());
        System.out.printf("Attempt %d succeeded.\n\n", retryCount + 1);
      } catch (Exception e) {
        // Halve the page size, wait, then retry with the smaller page.
        pageLimit /= 2;
        System.out.printf(
            "Attempt %d failed. Retrying in %d milliseconds with page size of %d...\n",
            retryCount + 1, waitTime, pageLimit);
        Thread.sleep(waitTime);
        waitTime *= 2;
        statementBuilder.limit(pageLimit);
        retryCount++;
      }
    } while (page == null && retryCount < MAX_RETRY_LIMIT);

    if (page != null) {
      totalResultSetSize = page.getTotalResultSetSize();
      // Your code to handle the line item page, e.g., save it to a database.
    } else {
      throw new Exception(String.format(
          "Failed to get line items at offset %d with page size %d after %d attempts.",
          statementBuilder.getOffset(), pageLimit, retryCount));
    }

    // Advance the offset by the page size that actually succeeded.
    statementBuilder.increaseOffsetBy(pageLimit);
  } while (statementBuilder.getOffset() < totalResultSetSize);

  System.out.printf("Number of results found: %d\n", totalResultSetSize);
}
Using the above code, if the first three attempts for the first page failed but the fourth succeeded, the output would look like the following:
Getting line items with page size of 500 at offset 0 (attempt 1)...
Attempt 1 failed. Retrying in 2000 milliseconds with page size of 250...
Getting line items with page size of 250 at offset 0 (attempt 2)...
Attempt 2 failed. Retrying in 4000 milliseconds with page size of 125...
Getting line items with page size of 125 at offset 0 (attempt 3)...
Attempt 3 failed. Retrying in 8000 milliseconds with page size of 62...
Getting line items with page size of 62 at offset 0 (attempt 4)...
Attempt 4 succeeded.

Getting line items with page size of 500 at offset 62 (attempt 1)...
Attempt 1 succeeded.
If you’re getting exceptions that weren't addressed by our examples and are unclear on how to diagnose them, you can always write to us on the API forums. Include the requestId of the call that failed, especially if you receive a SERVER_ERROR.

Troubleshooting and error handling – handling exceptions thrown by the DFP API (Part I)

We get a lot of questions about exceptions thrown by the DFP API and what to do about them. Seeing an exception from time to time is a normal part of the API workflow and should be expected. Some exceptions are caused by user errors, others by server issues. We strongly recommend that you write code to handle exceptions early on so that it’s easier to troubleshoot issues when they occur. In this blog post, we’ll go over the first of two major scenarios where adding exception handling will benefit you.

Creating or updating entities

When creating or updating entities with the DFP API, if you forget to supply a required field or set a field to an invalid value, the API throws an ApiException. If you write code to catch this exception, you can print out the error message to show the root cause. The following is an example showing how to do this when using the Google Ads API Java Client Library to create line items.

// Create a line item.
LineItem lineItem = new LineItem();
// lineItem.setSomeThings(...);

try {
  // Create the line item on the server.
  lineItemService.createLineItems(new LineItem[] {lineItem});
} catch (ApiException e) {
  ApiError[] apiErrors = e.getErrors();
  for (ApiError apiError : apiErrors) {
    StringBuilder errorMessage = new StringBuilder();
    errorMessage.append(String.format(
        "There was an error of type '%s', on the field '%s', "
            + "caused by an invalid value '%s', with the error message '%s'",
        apiError.getApiErrorType(), apiError.getFieldPath(),
        apiError.getTrigger(), apiError.getErrorString()));
    if (apiError instanceof NotNullError) {
      errorMessage.append(String.format(", with the reason '%s'.",
          ((NotNullError) apiError).getReason()));
    } else if (apiError instanceof FrequencyCapError) {
      errorMessage.append(String.format(", with the reason '%s'.",
          ((FrequencyCapError) apiError).getReason()));
    }

    // Append details of other ApiErrors that you are interested in.

    System.err.println(errorMessage.toString());
  }
}
If you use this code to create a line item, it prints the following if you don’t specify an order ID:
There was an error of type 'NotNullError', on the field 'lineItem[0].orderId',
caused by an invalid value '', with the error message 'NotNullError.NULL',
with the reason 'NULL'.
If you provide invalid values for the frequency caps field when creating a line item, you’ll get the following:
There was an error of type 'FrequencyCapError', on the field
'lineItem[0].frequencyCaps', caused by an invalid value '1000', with the
error message 'FrequencyCapError.RANGE_LIMIT_EXCEEDED', with the reason
'RANGE_LIMIT_EXCEEDED'.
There was an error of type 'FrequencyCapError', on the field
'lineItem[0].frequencyCaps', caused by an invalid value '0', with the error
message 'FrequencyCapError.IMPRESSIONS_TOO_LOW', with the reason
'IMPRESSIONS_TOO_LOW'.
Depending on the type of application you’re writing, you may want to show these error messages to your user in a UI or log them. Note that you can find an entity’s required fields and valid field values in our reference documentation, for example, LineItem.

If you’re getting exceptions when creating or updating entities that are not ApiException errors (such as a SERVER_ERROR) and are unclear on how to diagnose them, you can always write to us on the API forums. Include the requestId of the call that failed, and we will help you diagnose the issue.

Stay tuned for Part II of this blog post, where we will discuss handling exceptions thrown when retrieving entities.

A new way to access our Ads SOAP APIs through Python

The Ads APIs Python client library, adspygoogle, has been around for quite some time, supporting versions of Python as old as 2.4 but capping out at 2.7. We’ve been getting more and more feedback recently that our users want Python 3 support. Also, many of adspygoogle’s dependencies are dated and no longer officially supported. We heard you, and with these items in mind...

A New Client Library

A completely new client library — googleads — is now available for Python 3 as well as Python 2.7. The new library has several advantages over our existing library:

  • Most of your code from the previous Python library will work with minimal modifications, listed below and in our migration guide.
  • The dependencies are all hosted on PyPI, so you do not need to use the --allow-external or --allow-unverified flags at install.
  • The constructors and attributes in the new library give you more control over client objects; for example, you can easily switch out OAuth 2.0 credentials and manage multiple accounts.
  • Data types are retained. Whereas adspygoogle uses strings for everything, googleads can send and receive numbers, booleans, datetimes, etc.
  • The library is more integrated with the Python standard library; for example, you can use the built-in logging framework to log SOAP messages.
  • The library is built on top of a fork of suds and allows users who are familiar with suds to take advantage of that library’s features.

Migrating to the New Client Library

Existing Python users can keep almost all of the logic that works with the objects defined in our APIs. An important difference is that responses from the API are now objects returned by suds instead of dictionaries. These objects support dictionary syntax for retrieving values, but you cannot call dictionary methods on them; most importantly, this means that .get() and .update() are no longer supported. Where in adspygoogle you may have done this:

response = inventory_service.GetAdUnitsByStatement(statement.ToStatement())[0]
ad_units = response.get('results')

You will now need to do this:

response = inventory_service.getAdUnitsByStatement(statement.ToStatement())
ad_units = response['results'] if 'results' in response else None

Other, more minor changes include using the new methods for instantiating client and service objects, and calling services with the exact method names from our APIs, which are generally lower camel case.

For more information on migration, check out the migration guide we have posted in the new library’s wiki section.

The googleads library will be the primary focus of development moving forward. The existing adspygoogle library is now in maintenance mode, but we will continue to add support for new AdWords and DFP API releases through December 2014.

If you find any bugs, have a patch to contribute, or just have a feature request, please feel free to file an issue on the issue tracker.

New Statement Builder Features in the DFP .NET Library

Good news for .NET DFP developers - working with PQL just got easier. Additional features have just been added to the statement builder in the .NET client library. You can now use a more fluent interface to construct and modify PQL statements in parts. This allows for better query validation and code that's easier to construct:

// Using the statement builder with the PQL service
StatementBuilder statementBuilder = new StatementBuilder()
    .Select("Id, Name")
    .From("Line_Item")
    .Where("isMissingCreatives = :isMissingCreatives")
    .OrderBy("id ASC")
    .Limit(StatementBuilder.SUGGESTED_PAGE_LIMIT)
    .AddValue("isMissingCreatives", true);

These changes also make modifying and reusing your statements much easier. For example, when paging through result sets you can update the offset without having to alter the entire query. This makes paging quick and easy.

// Using the statement builder with getCreativesByStatement
StatementBuilder statementBuilder = new StatementBuilder()
    .Where("advertiser_id = :advertiserId")
    .OrderBy("id ASC")
    .Limit(StatementBuilder.SUGGESTED_PAGE_LIMIT)
    .AddValue("advertiserId", myAdvertiserId);

CreativePage page;
do {
  page = creativeService.getCreativesByStatement(
      statementBuilder.ToStatement());
  if (page.results != null) {
    foreach (Creative creative in page.results) {
      Console.WriteLine("Creative ID: {0}", creative.id);
    }
  }
  statementBuilder.IncreaseOffsetBy(StatementBuilder.SUGGESTED_PAGE_LIMIT);
} while (statementBuilder.GetOffset() < page.totalResultSetSize);

Migration

If you're already using the statement builder, no immediate changes are needed. The previous functionality is still available, although it is now marked as obsolete.

The only pitfall comes with mixing functionality. If you set the statement builder's query directly, don't attempt to set parts of the query individually. Likewise, if you're building the query in parts, direct access is not allowed.

Stick with one usage or the other - mixing the two will give you an IllegalOperationException.

For more information, you can check out the source or examples. If you have any questions about the new statement builder features, feel free to ask us in the developer forum or on our Google+ Developers page.

Announcing v201403 of the DFP API

Today, we launch v201403 of the DFP API. This release streamlines the API by removing non-bulk methods, and it introduces some crowd-favorite features, like creating VAST video redirect creatives. A detailed list of these features and what’s changed can be found on our release notes page. Also, stay tuned for a special hangout on March 24th where we’ll go over all of these new features and discuss some common topics brought up on our forum.

Bulking up

Starting in v201403, we've removed all non-bulk methods from the API, such as OrderService.getOrder and LineItemService.createLineItem. The goal is to encourage you to think about your application holistically: instead of fetching a single order, sync orders regularly using lastModifiedDateTime; and instead of creating or updating a single line item, group operations together, e.g., create all orders first, then line items, and then creatives, rather than one at a time.
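
For example, a sync query might look roughly like this (a hedged Java sketch; lastSyncDateTime is a placeholder for a checkpoint value you persist between runs):

// Select only line items modified since the last sync checkpoint.
StatementBuilder statementBuilder = new StatementBuilder()
    .where("lastModifiedDateTime > :lastSync")
    .orderBy("id ASC")
    .limit(StatementBuilder.SUGGESTED_PAGE_LIMIT)
    .withBindVariableValue("lastSync", lastSyncDateTime);
// Page through the results as usual and upsert each line item locally.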

If you do need to fetch a single object, we recommend following this pattern (in Java):


// Create a statement to select only a single order by ID.
StatementBuilder statementBuilder =
    new StatementBuilder()
        .where("id = :id")
        .orderBy("id ASC")
        .limit(1)
        .withBindVariableValue("id", orderId);

// Get the order.
OrderPage page =
    orderService.getOrdersByStatement(statementBuilder.toStatement());

Order order = Iterables.getOnlyElement(Arrays.asList(page.getResults()));
Notice that you can easily turn this into a function if needed, and possibly implement a caching mechanism.

Creating video creatives

This launch marks our first step in opening up video creatives for writing. We are starting with VastRedirectCreatives: video creatives that store the VAST 2 or 3 tag URL on an external server. All other types of video creatives are still read-only for now. We will look into opening up more externally hosted video creatives in coming versions, so please let us know what you are looking for on our forum.
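
As a rough illustration, creating one might look like the following Java sketch; the IDs, URL, and size here are placeholders, so check the reference documentation for the full set of required fields:

// A hedged sketch of creating a VAST redirect creative.
VastRedirectCreative vastCreative = new VastRedirectCreative();
vastCreative.setName("VAST redirect creative #1");
vastCreative.setAdvertiserId(advertiserId);
vastCreative.setVastXmlUrl("https://example.com/vast.xml");
vastCreative.setVastRedirectType(VastRedirectType.LINEAR);

Size size = new Size();
size.setWidth(640);
size.setHeight(480);
vastCreative.setSize(size);

// Creatives are created in bulk, in keeping with this release's theme.
Creative[] createdCreatives =
    creativeService.createCreatives(new Creative[] {vastCreative});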

ClientLogin gets the axe

You will notice that starting in v201403, you must use OAuth2 to authenticate with the API. Using ClientLogin will result in an error from our servers. We've modified all of our client libraries to throw exceptions if you are using ClientLogin with any non-compatible version. If you need a refresher on how to use OAuth2, see a guide for Java, PHP, .NET, Python, or Ruby.

Look out, deprecation ahead

We are getting closer to the deprecation of v201206, v201204, v201203, v201201, v201111, and v201108. These versions will be turned off on April 1st. If you have not already, please upgrade to v201403 and OAuth2 as soon as possible. Java developers should also note that v201403 is only available in the new Java client library; the old Java library will not receive any further feature enhancements.

As always, if you have any suggestions or questions about the new version, feel free to drop us a line on our Ads Developer Google+ page.

- DFP API Team

New PQL tables in the DFP API

In recent DFP API releases, we announced the addition of more tables to the PublisherQueryLanguageService, starting with Line_Item and Ad_Unit. These tables are an alternative to retrieving entities from their respective services’ get***ByStatement methods, and they allow you to retrieve sparse entities containing only the fields you’re interested in. For example, the following select statement retrieves the first page of only the ID and name of line items that are missing creatives.
SELECT Id, Name from Line_Item WHERE IsMissingCreatives = true LIMIT 500 OFFSET 0
In this blog post, we’ll go over some situations where this feature can be utilized to speed up entity retrieval times from hours to minutes.

Entity synchronization


The first major use case that benefits from these new tables is entity synchronization. For example, if you’re synchronizing line items on your network into a local database, you’re most likely using LineItemService.getLineItemsByStatement and, hopefully, taking advantage of the LineItem.lastModifiedDateTime field to fetch only line items that have changed since the last time you synchronized. But even with lastModifiedDateTime, this synchronization can still take a while, depending on how many line items you have on your network and how complex their targeting is. If you don’t need to synchronize all the fields in your line item objects, you may be able to use the Line_Item PQL table to perform this synchronization instead.

If you do need to synchronize fields not yet available in the Line_Item table, such as targeting, you can still take advantage of this table for computed fields that don’t affect lastModifiedDateTime, such as LineItem.status. What you can do is synchronize your line items as usual with getLineItemsByStatement filtering on lastModifiedDateTime. Then update your local statuses with selected line item statuses from the Line_Item table (a very quick process):
SELECT Id, Status from Line_Item LIMIT 500 OFFSET 0
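
In Java, such a status refresh might look roughly like this (a hedged sketch; the service and builder names follow the Java client library):

PublisherQueryLanguageServiceInterface pqlService =
    dfpServices.get(session, PublisherQueryLanguageServiceInterface.class);

StatementBuilder statementBuilder = new StatementBuilder()
    .select("Id, Status")
    .from("Line_Item")
    .limit(StatementBuilder.SUGGESTED_PAGE_LIMIT)
    .offset(0);

int rowCount;
do {
  ResultSet resultSet = pqlService.select(statementBuilder.toStatement());
  Row[] rows = resultSet.getRows();
  rowCount = (rows == null) ? 0 : rows.length;
  for (int i = 0; i < rowCount; i++) {
    // Update the status of the matching local line item from rows[i].
  }
  statementBuilder.increaseOffsetBy(StatementBuilder.SUGGESTED_PAGE_LIMIT);
} while (rowCount == StatementBuilder.SUGGESTED_PAGE_LIMIT);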

Match tables for reports


Local copies of line item information can also be used as match tables to construct more detailed reports. Sometimes, you may want more information in your reports than what is currently available as a dimensionAttribute. For example, if you run a report by line item ID, you may also want other line item information like isMissingCreatives to show in the report. Because LineItem.isMissingCreatives is unavailable as a DimensionAttribute, you can create a local match table containing line item IDs and additional columns to be included in the report. Then you can merge this match table with the report by the line item ID to obtain a report with those additional columns.

For example, let’s say you run a report with the following configuration:
Dimension.LINE_ITEM_ID
DimensionAttribute.LINE_ITEM_COST_TYPE
Column.AD_SERVER_IMPRESSIONS
The report in CSV_DUMP format looks something like this:
Dimension.LINE_ITEM_ID, DimensionAttribute.LINE_ITEM_COST_TYPE,
Column.AD_SERVER_IMPRESSIONS
1234567, CPM, 206
1234568, CPD, 45
1234569, CPD, 4
To also include LineItem.isMissingCreatives in the report, you would fetch a match table and save it (as a CSV file for example) by retrieving ID and isMissingCreatives from the Line_Item table.
SELECT Id, IsMissingCreatives from Line_Item LIMIT 500 OFFSET 0
Full examples of how to fetch match tables are available in all our client libraries. For instance, Python’s is here. Then using a script or a spreadsheet program, merge the match table with the report to produce something like this:
Dimension.LINE_ITEM_ID, DimensionAttribute.LINE_ITEM_COST_TYPE,
Column.AD_SERVER_IMPRESSIONS, LineItem.isMissingCreatives
1234567, CPM, 206, true
1234568, CPD, 45, false
1234569, CPD, 4, false
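If you script the merge instead of using a spreadsheet, a simple ID-keyed join is enough. Here is a hypothetical Java sketch; the file names and column positions are assumptions, and header rows are not treated specially:

import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class MergeMatchTable {
  public static void main(String[] args) throws Exception {
    // Load the match table: line item ID -> IsMissingCreatives.
    Map<String, String> matchTable = new HashMap<String, String>();
    for (String line : Files.readAllLines(Paths.get("match_table.csv"))) {
      String[] cols = line.split(",\\s*");
      matchTable.put(cols[0], cols[1]);
    }

    // Append the matched column to each report row, keyed on line item ID.
    try (PrintWriter out = new PrintWriter("merged_report.csv")) {
      for (String line : Files.readAllLines(Paths.get("report.csv"))) {
        String id = line.split(",\\s*")[0];
        String extra = matchTable.containsKey(id) ? matchTable.get(id) : "";
        out.println(line + ", " + extra);
      }
    }
  }
}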
If you have any questions on these new PQL tables, or suggestions on what PQL tables you want in the next release, please let us know on the API forum, or on our Google+ Developers page.

Smarter Querying using Pagination with the DFP API

As your networks grow, so does their data on the DFP servers. Where you previously made requests for tens of line items, you now find yourself requesting tens of thousands. Of course, with more data comes more responsibility - your requests are now taking longer and the response sizes have increased accordingly. You notice that some of your requests are now returning with 'ServerError.SERVER_ERROR.' Things might seem hopeless, but don’t panic...

Many of these problems can be solved with pagination! What does this mean from a developer's perspective? In many implementations, we've noticed that applications make requests with empty statements to calls like these:
getCreativesByStatement(" ")
getLineItemsByStatement(" ")
getOrdersByStatement(" ")
getCustomTargetingValuesByStatement(" ")
These requests do not limit the size of the returned result set. In doing so, the applications are asking for the data of every single object belonging to that service. When you’re talking about thousands of line items, each with their own distinct custom targeting, the amount of data will often cause the request to fail.

The fix? When creating PQL statements to query for DFP objects, you’ll find that our client libraries all use a recommended page size (500) to limit your queries to smaller batches via the 'LIMIT' keyword, which should feel familiar to anyone who's used SQL. After the first page has returned successfully, use the 'OFFSET' keyword to retrieve each subsequent page until your request returns nothing. If calls still take a long time to return a page, or still fail, try a smaller page size.

If you use pagination to retrieve data, you not only get the benefit of increased reliability, but also protect yourself should something go wrong. Instead of retrying the entire request from the start again, you can simply pick up where you left off.
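
For instance, a bounded, paginated fetch in the Java client library looks roughly like this sketch (following the StatementBuilder pattern used elsewhere on this blog):

// Page through line items 500 at a time instead of fetching everything.
StatementBuilder statementBuilder = new StatementBuilder()
    .orderBy("id ASC")
    .limit(StatementBuilder.SUGGESTED_PAGE_LIMIT)
    .offset(0);

int totalResultSetSize = 0;
do {
  LineItemPage page =
      lineItemService.getLineItemsByStatement(statementBuilder.toStatement());
  if (page.getResults() != null) {
    totalResultSetSize = page.getTotalResultSetSize();
    for (LineItem lineItem : page.getResults()) {
      // Process each line item; if a later page fails, you can resume
      // from the current offset instead of starting over.
    }
  }
  statementBuilder.increaseOffsetBy(StatementBuilder.SUGGESTED_PAGE_LIMIT);
} while (statementBuilder.getOffset() < totalResultSetSize);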

To see how to implement pagination logic, you can find examples in each of our client libraries:
Ruby
Java
PHP
Python
Dotnet
If you have any questions on using pagination with your queries, post them on the API forum or Google+ Developers page.

- DFP API Team

Ads client libraries are now on GitHub!

We have moved all our client libraries to GitHub. Here’s a complete list of all our client libraries, with their new locations:



We have also moved all wiki pages and open issues to the corresponding GitHub projects. You can find downloads for a project under its releases page. All future client library releases will be published on GitHub only, so make sure you follow us on GitHub to keep track of new releases and report issues. If you tracked a client library's git repository on code.google.com, you should update your local clone to point to the GitHub repository instead.
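
For example, assuming the Java library's repository name (each library has its own), pointing an existing clone at GitHub looks like this:

git remote set-url origin https://github.com/googleads/googleads-java-lib.git
git fetch origin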

If you have any questions about the client libraries, you can post them on our forums. Check out our Google+ page for client libraries and Ads APIs updates.