From 985f93aa805aad0a221b43a6a456dff69231a7d4 Mon Sep 17 00:00:00 2001
From: krithika sreenivasan
- 2. There’s no magic number, but around 55 characters or less is good.
- 3. There’s no set syntax, but “Primary Keyword – Secondary Keyword | Brand Name” is good.
-
- Below are a few articles on optimizing title tags for search engines:
-
- * Nine Best Practices For Optimized < title > Tags
- * Title Tag
-
-
-
+
+ 1. Place page titles in a <title> tag within the <head>.
+ 2. There’s no magic number, but around 55 characters or less is good.
+ 3. There’s no set syntax, but “Primary Keyword – Secondary Keyword | Brand Name” is good.
+
+ Below are a few articles on optimizing title tags for search engines:
+
+ * Nine Best Practices For Optimized < title > Tags
+ * Title Tag
+ * Title: the most important element of a quality Web page
+
+ If you’re interested in learning more about search, register for our Search Is the New Big Data (in-person training) on April 10.
+ diff --git a/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md b/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md index afe3ddd8ed..a858ee2b7f 100644 --- a/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md +++ b/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md @@ -51,7 +51,7 @@ Following an opening keynote by Federal Communications Commission (FCC) CIO, Dav * public private partnerships and * inter-agency work. -These panels will explore how agencies can integrate their [data]({{< ref "/topics/code" >}}data1/), [social media]({{< ref "/topics/social-media" >}}), [user experience]({{< ref "/topics/user-experience" >}}), [mobile development]({{< ref "/topics/mobile" >}}) and other programs in order to achieve the best improvements for citizen services. Confirmed speakers include: +These panels will explore how agencies can integrate their [data]({{< ref "/topics/code" >}}), [social media]({{< ref "/topics/social-media" >}}), [user experience]({{< ref "/topics/user-experience" >}}), [mobile development]({{< ref "/topics/mobile" >}}) and other programs in order to achieve the best improvements for citizen services. Confirmed speakers include: * Jack Bienko, Small Business Administration (SBA) * Denise Shaw, Environmental Protection Agency (EPA) diff --git a/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md b/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md index 45a8d5b079..a01cb647b8 100644 --- a/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md +++ b/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md @@ -54,5 +54,5 @@ What can we do improve the quality of inter-agency work? Grama thinks it would b Agencies can also strive to think beyond their silos, since ultimately we work for the taxpayer. If you do work for another agency, “the taxpayer benefits even if your own agency doesn’t see the direct benefit,” said Pulsifer. -What has been your experience with inter-agency work?_**Alison Lemon** is a [Knowledge Manager for the SocialGov Community](FIND?s=alison+lemon.md) and a Senior Analyst for Social Media with the **FDA’s Office of Women’s Health**._ +What has been your experience with inter-agency work?_**Alison Lemon** is a [Knowledge Manager for the SocialGov Community]({{}}) and a Senior Analyst for Social Media with the **FDA’s Office of Women’s Health**._ _Thanks to our special Summit blogger, Alison, who took up the Open Opportunities challenge. 
You can [find more opportunities to participate](http://gsablogs.gsa.gov/dsic/category/open-opportunities/)._ \ No newline at end of file diff --git a/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md b/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md index 61ab82a2dd..485c5a18dc 100644 --- a/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md +++ b/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md @@ -102,7 +102,7 @@ This populates the bigram field for each index with whatever natural language fi ], "highlight": { "pre_tag": "", - "post_tag": "<\/strong>" + "post_tag": "" } } } diff --git a/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md b/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md index 4704f163ca..8120445304 100644 --- a/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md +++ b/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md @@ -21,7 +21,7 @@ If you work at a U.S. Post Office, you interact with your customers, talk with t So, in addition to collecting good analytics (like through GSA’s free [Digital Analytics Program]({{< ref "/guides/dap/_index.md" >}} "DAP: Digital Analytics Program")), it’s crucial to understand your how your customers use your technology on a one-to-one basis. That’s why you focus on the User Experience (or UX); a product’s ease-of-use, whether it looks nice or creates any emotional friction, and if people can use it to accomplish something they want. -User Experience is closely related to [Customer Experience]({{< ref "2014-07-07-user-experience-ux-vs-customer-experience-cx-whats-the-dif.md" >}} "User Experience (UX) vs. Customer Experience (CX): What’s the Dif?"), and the [User Experience]({{< ref "digitalgov-user-experience-resources.md" >}} "DigitalGov User Experience Program") program that I manage at GSA helps: build UX teams at agencies across the federal government, them to understand their customers’ needs, and build products centered around them. +User Experience is closely related to [Customer Experience]({{< ref "2014-07-07-user-experience-ux-vs-customer-experience-cx-whats-the-dif.md" >}}), and the [User Experience]({{< ref "digitalgov-user-experience-resources.md" >}} "DigitalGov User Experience Program") program that I manage at GSA helps: build UX teams at agencies across the federal government, them to understand their customers’ needs, and build products centered around them. 
For this month’s UX theme, we’re hitting this topic from lots of angles: diff --git a/content/news/2014/12/2014-12-23-challenges-round-up.md b/content/news/2014/12/2014-12-23-challenges-round-up.md index 5f1a8e2cde..3f33aa872e 100644 --- a/content/news/2014/12/2014-12-23-challenges-round-up.md +++ b/content/news/2014/12/2014-12-23-challenges-round-up.md @@ -1,50 +1,50 @@ ---- -slug: challenges-round-up -date: 2014-12-23 10:00:54 -0400 -title: 'Challenge & Prize Competition Round-Up' -summary: Recap of the 2014 Challenge and Prize competition events hosted by DigitalGov -authors: - - apiazza -topics: - - challenges - - monthly-theme - - CFPB - - challenge-gov - - challenges-and-prize-competitions - - challenges-and-prizes-community-of-practice - - Consumer Financial Protection Bureau - - crowdsourcing - - nasa - - open-source - - OSTP - - recaps - - white-house-office-of-science-and-technology-policy ---- - -{{< legacy-img src="2014/09/600-x-400-Businessman-Fighting-Bplanet-iStock-Thinkstock-181596463.jpg" alt="Fighting businessmen" caption="" >}} - -We’ve had an excellent year of training and community events for the federal challenge and prize community, so for the month of December DigitalGov University has taken a look at the events we’ve hosted this year and rounded them up in line with this month’s [Crowdsourcing theme]({{< ref "2014-12-08-crowdsourcing-month-an-overview.md" >}} "Crowdsourcing Month: An Overview"). - -On Wednesday, December 10, the [Challenge and Prize Community of Practice]({{< ref "challenges-prizes.md" >}} "Challenges & Prizes Community") hosted its quarterly in-person meeting to highlight the roles and responsibilities that [Challenge.gov](https://www.challenge.gov/), the Office of Science and Technology Policy (OSTP) at the White House, NASA’s Center of Excellence for Collaborative Innovation (CoECI) and federal agencies play in hosting and executing challenge and prize competitions. There were two panels of experienced managers that gave attendees insight into the institutionalization of prizes at their agencies and what makes-up a successful challenge. Finally, attendees were also able to garner exactly what they need to provide for the America COMPETES annual Congressional reporting. - -This event brought together more than 70 practitioners in-person and online, and we’ll be sharing the clips from the livestream in the very near future. Now here’s a look at our other events. - -{{< legacy-img src="2014/04/600-x-165-ChallengeGov-logo.jpg" alt="Full logo for Challenge.gov with the tagline: Government Challenges, Your Solutions." >}} - -## The New Challenge.gov - -[Challenge.gov added new features recently]({{< ref "2014-10-02-introducing-the-new-challenge-gov.md" >}} "Introducing the New Challenge.gov") and has pulled the full federal list and competition management tools onto the new platform. It now enables agencies to create and manage their competitions on a robust back-end platform. You can [learn more about the Challenge.gov platform](http://youtu.be/Yw58jVvppAw?list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC) and [how to use Challenge.gov](https://www.youtube.com/watch?v=qXYar-2de44&index=1&list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC) in these two webinars. - -## Open-Sourced Ideation Platform - -The Consumer Financial Protection Bureau (CFPB) also presented their [open sourced ideation tool, IdeaBox](http://youtu.be/KRQ24645LOE?list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC), to the community in October of this year. 
This event showcased how you can build an innovation program by leveraging CFPB’s open-source ideation platform, replicating their lightweight staffing model, and using their playbook of resources. They use the platform to crowdsource ideas from employees across the agency. IdeaBox source code is shared openly on Github for anybody to use. - -## Getting Started - -But, if you are just getting started hosted some events that will jump-start your thinking and help you develop a plan of attack. First, we hosted **Cristin Dorgelo**, former Assistant Director for Grand Challenges at the Office of Science Technology and Policy at the White House, to give a [rundown of how challenge and prize initiatives can benefit your agency](http://youtu.be/Frwk3Fvw_H4?list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC) and steps you need to take to get started. - -You may also be interested in watching [Why Your Challenge & Prize Competition Needs a Communication Strategy](http://youtu.be/wieTrYMT4zM?list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC). In this event, learn why a robust communications plan is essential to a successful challenge, what methods of communication you should think about, what has worked and not worked for [The Desal Prize](http://www.securingwaterforfood.org/the-desal-prize/), and how you can structure your prize competition communication strategy. - -If you are thinking about launching a video competition then you may be interested in watching [Running a Successful Video Challenge](https://www.youtube.com/watch?v=kaK90anXf7w&index=7&list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC). The presenter for this event, **Jason Crusan**, Director CoECI NASA, presents a case study of how NASA has used professional crowdsourcing for video creation. **Tammi Marcoullier**, Challenge.gov Program Manager, reviews getting from A to B, or how to decide what kind of video challenge you want to execute by examining your goals. - -Finally, you can take a look at the [summary of our event on Design Thinking](http://youtu.be/oLAtcfGCcdc?list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC) and how this workshop helped folks working on challenge and prize competitions think through the design and execution of their challenge. Enjoy! For more events around challenge and prize competitions check out our [Events Calendar]({{< ref "/events" >}}. For questions about Challenge.gov or the [Challenge & Prize Community of Practice]({{< ref "challenges-prizes.md" >}} "Challenges & Prizes Community") email
- 1. Place page titles in a <title> tag within the <head>.
- 2. There’s no magic number, but around 55 characters or less is good.
- 3. There’s no set syntax, but “Primary Keyword – Secondary Keyword | Brand Name” is good.
-
- Below are a few articles on optimizing title tags for search engines:
-
- * Nine Best Practices For Optimized < title > Tags
- * Title Tag
- * Title: the most important element of a quality Web page
-
+
+ 1. Place page titles in a
+
+ Below are a few articles on optimizing title tags for search engines:
+
+ * Nine Best Practices For Optimized < title > Tags
+ * Title Tag
+
+
+
+ If you’re interested in learning more about search, register for our Search Is the New Big Data (in-person training) on April 10.
+ diff --git a/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md b/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md index a858ee2b7f..afe3ddd8ed 100644 --- a/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md +++ b/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md @@ -51,7 +51,7 @@ Following an opening keynote by Federal Communications Commission (FCC) CIO, Dav * public private partnerships and * inter-agency work. -These panels will explore how agencies can integrate their [data]({{< ref "/topics/code" >}}), [social media]({{< ref "/topics/social-media" >}}), [user experience]({{< ref "/topics/user-experience" >}}), [mobile development]({{< ref "/topics/mobile" >}}) and other programs in order to achieve the best improvements for citizen services. Confirmed speakers include: +These panels will explore how agencies can integrate their [data]({{< ref "/topics/code" >}}data1/), [social media]({{< ref "/topics/social-media" >}}), [user experience]({{< ref "/topics/user-experience" >}}), [mobile development]({{< ref "/topics/mobile" >}}) and other programs in order to achieve the best improvements for citizen services. Confirmed speakers include: * Jack Bienko, Small Business Administration (SBA) * Denise Shaw, Environmental Protection Agency (EPA) diff --git a/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md b/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md index a01cb647b8..45a8d5b079 100644 --- a/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md +++ b/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md @@ -54,5 +54,5 @@ What can we do improve the quality of inter-agency work? Grama thinks it would b Agencies can also strive to think beyond their silos, since ultimately we work for the taxpayer. If you do work for another agency, “the taxpayer benefits even if your own agency doesn’t see the direct benefit,” said Pulsifer. -What has been your experience with inter-agency work?_**Alison Lemon** is a [Knowledge Manager for the SocialGov Community]({{}}) and a Senior Analyst for Social Media with the **FDA’s Office of Women’s Health**._ +What has been your experience with inter-agency work?_**Alison Lemon** is a [Knowledge Manager for the SocialGov Community](FIND?s=alison+lemon.md) and a Senior Analyst for Social Media with the **FDA’s Office of Women’s Health**._ _Thanks to our special Summit blogger, Alison, who took up the Open Opportunities challenge. 
You can [find more opportunities to participate](http://gsablogs.gsa.gov/dsic/category/open-opportunities/)._ \ No newline at end of file diff --git a/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md b/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md index 485c5a18dc..61ab82a2dd 100644 --- a/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md +++ b/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md @@ -102,7 +102,7 @@ This populates the bigram field for each index with whatever natural language fi ], "highlight": { "pre_tag": "", - "post_tag": "" + "post_tag": "<\/strong>" } } } diff --git a/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md b/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md index 8120445304..4704f163ca 100644 --- a/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md +++ b/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md @@ -21,7 +21,7 @@ If you work at a U.S. Post Office, you interact with your customers, talk with t So, in addition to collecting good analytics (like through GSA’s free [Digital Analytics Program]({{< ref "/guides/dap/_index.md" >}} "DAP: Digital Analytics Program")), it’s crucial to understand your how your customers use your technology on a one-to-one basis. That’s why you focus on the User Experience (or UX); a product’s ease-of-use, whether it looks nice or creates any emotional friction, and if people can use it to accomplish something they want. -User Experience is closely related to [Customer Experience]({{< ref "2014-07-07-user-experience-ux-vs-customer-experience-cx-whats-the-dif.md" >}}), and the [User Experience]({{< ref "digitalgov-user-experience-resources.md" >}} "DigitalGov User Experience Program") program that I manage at GSA helps: build UX teams at agencies across the federal government, them to understand their customers’ needs, and build products centered around them. +User Experience is closely related to [Customer Experience]({{< ref "2014-07-07-user-experience-ux-vs-customer-experience-cx-whats-the-dif.md" >}} "User Experience (UX) vs. Customer Experience (CX): What’s the Dif?"), and the [User Experience]({{< ref "digitalgov-user-experience-resources.md" >}} "DigitalGov User Experience Program") program that I manage at GSA helps: build UX teams at agencies across the federal government, them to understand their customers’ needs, and build products centered around them. 
For this month’s UX theme, we’re hitting this topic from lots of angles: diff --git a/content/news/2014/12/2014-12-23-challenges-round-up.md b/content/news/2014/12/2014-12-23-challenges-round-up.md index 3f33aa872e..5f1a8e2cde 100644 --- a/content/news/2014/12/2014-12-23-challenges-round-up.md +++ b/content/news/2014/12/2014-12-23-challenges-round-up.md @@ -1,50 +1,50 @@ ---- -slug: challenges-round-up -date: 2014-12-23 10:00:54 -0400 -title: 'Challenge & Prize Competition Round-Up' -summary: Recap of the 2014 Challenge and Prize competition events hosted by DigitalGov -authors: - - apiazza -topics: - - challenges - - monthly-theme - - CFPB - - challenge-gov - - challenges-and-prize-competitions - - challenges-and-prizes-community-of-practice - - Consumer Financial Protection Bureau - - crowdsourcing - - nasa - - open-source - - OSTP - - recaps - - white-house-office-of-science-and-technology-policy ---- - -{{< legacy-img src="2014/09/600-x-400-Businessman-Fighting-Bplanet-iStock-Thinkstock-181596463.jpg" alt="Fighting businessmen" caption="" >}} - -We’ve had an excellent year of training and community events for the federal challenge and prize community, so for the month of December DigitalGov University has taken a look at the events we’ve hosted this year and rounded them up in line with this month’s [Crowdsourcing theme]({{< ref "2014-12-08-crowdsourcing-month-an-overview.md" >}} "Crowdsourcing Month: An Overview"). - -On Wednesday, December 10, the [Challenge and Prize Community of Practice]({{< ref "challenges-prizes.md" >}} "Challenges & Prizes Community") hosted its quarterly in-person meeting to highlight the roles and responsibilities that [Challenge.gov](https://www.challenge.gov/), the Office of Science and Technology Policy (OSTP) at the White House, NASA’s Center of Excellence for Collaborative Innovation (CoECI) and federal agencies play in hosting and executing challenge and prize competitions. There were two panels of experienced managers that gave attendees insight into the institutionalization of prizes at their agencies and what makes-up a successful challenge. Finally, attendees were also able to garner exactly what they need to provide for the America COMPETES annual Congressional reporting. - -This event brought together more than 70 practitioners in-person and online, and we’ll be sharing the clips from the livestream in the very near future. Now here’s a look at our other events. - -{{< legacy-img src="2014/04/600-x-165-ChallengeGov-logo.jpg" alt="Full logo for Challenge.gov with the tagline: Government Challenges, Your Solutions." >}} - -## The New Challenge.gov - -[Challenge.gov added new features recently]({{< ref "2014-10-02-introducing-the-new-challenge-gov.md" >}} "Introducing the New Challenge.gov") and has pulled the full federal list and competition management tools onto the new platform. It now enables agencies to create and manage their competitions on a robust back-end platform. You can [learn more about the Challenge.gov platform](http://youtu.be/Yw58jVvppAw?list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC) and [how to use Challenge.gov](https://www.youtube.com/watch?v=qXYar-2de44&index=1&list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC) in these two webinars. - -## Open-Sourced Ideation Platform - -The Consumer Financial Protection Bureau (CFPB) also presented their [open sourced ideation tool, IdeaBox](http://youtu.be/KRQ24645LOE?list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC), to the community in October of this year. 
This event showcased how you can build an innovation program by leveraging CFPB’s open-source ideation platform, replicating their lightweight staffing model, and using their playbook of resources. They use the platform to crowdsource ideas from employees across the agency. IdeaBox source code is shared openly on Github for anybody to use. - -## Getting Started - -But, if you are just getting started hosted some events that will jump-start your thinking and help you develop a plan of attack. First, we hosted **Cristin Dorgelo**, former Assistant Director for Grand Challenges at the Office of Science Technology and Policy at the White House, to give a [rundown of how challenge and prize initiatives can benefit your agency](http://youtu.be/Frwk3Fvw_H4?list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC) and steps you need to take to get started. - -You may also be interested in watching [Why Your Challenge & Prize Competition Needs a Communication Strategy](http://youtu.be/wieTrYMT4zM?list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC). In this event, learn why a robust communications plan is essential to a successful challenge, what methods of communication you should think about, what has worked and not worked for [The Desal Prize](http://www.securingwaterforfood.org/the-desal-prize/), and how you can structure your prize competition communication strategy. - -If you are thinking about launching a video competition then you may be interested in watching [Running a Successful Video Challenge](https://www.youtube.com/watch?v=kaK90anXf7w&index=7&list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC). The presenter for this event, **Jason Crusan**, Director CoECI NASA, presents a case study of how NASA has used professional crowdsourcing for video creation. **Tammi Marcoullier**, Challenge.gov Program Manager, reviews getting from A to B, or how to decide what kind of video challenge you want to execute by examining your goals. - -Finally, you can take a look at the [summary of our event on Design Thinking](http://youtu.be/oLAtcfGCcdc?list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC) and how this workshop helped folks working on challenge and prize competitions think through the design and execution of their challenge. Enjoy! For more events around challenge and prize competitions check out our [Events Calendar]({{< ref "/events" >}}). For questions about Challenge.gov or the [Challenge & Prize Community of Practice]({{< ref "/communities/challenges-prizes" >}}) email- Below are a few articles on optimizing title tags for search engines: -
- * Nine Best Practices For Optimized < title > Tags
- * Title Tag
-
-
-
-
- If you’re interested in learning more about search, register for our Search Is the New Big Data (in-person training) on April 10.
+--- +slug: plain-language-page-titles-more-important-than-ever +date: 2014-03-28 1:00:27 -0400 +title: 'Plain Language Page Titles: More Important than Ever' +summary: Government Web pages are found mainly through search engines. Google recently redesigned its search results page and there are quite a few small, but impactful, changes in this latest redesign. Specifically, it affects how page titles are displayed. Many experts now recommend even +authors: + - ammie-farraj-feijoo +topics: + - content + - writing + - big-data + - search-engine-optimization +--- + +[{{< legacy-img src="2014/03/DigitalGov-Search-screen-shot-600-x-485.jpg" alt="screen grab of DigitalGov Search in Google results page" >}}](https://s3.amazonaws.com/digitalgov/_legacy-img/2014/03/DigitalGov-Search-screen-shot-600-x-485.jpg)Government Web pages are found mainly through search engines. Google recently [redesigned its search results page](http://www.fastcodesign.com/3027704/how-googles-redesigned-search-results-augur-a-more-beautiful-web) and there are quite a few small, but impactful, changes in this latest redesign. Specifically, it affects how page titles are displayed. + +Many experts now recommend even shorter page titles. Below are a couple of articles (plus tools) to see how the change may affect your page titles: + +[Page Title & Meta Description By Pixel Width In SERP Snippet](http://www.screamingfrog.co.uk/page-title-meta-description-lengths-by-pixel-width/) + +[New Title Tag Guidelines & Preview Tool](http://moz.com/blog/new-title-tag-guidelines-preview-tool) + +In addition to the suggestions offered in our [previous article on Achieving Good SEO]({{< ref "2013-05-31-four-steps-to-achieve-good-seo.md" >}}), here are a few specific tips for page titles: + +1. Place page titles in a + ++ Below are a few articles on optimizing title tags for search engines: +
+ * Nine Best Practices For Optimized < title > Tags
+ * Title Tag
+
+
+
+
+ If you’re interested in learning more about search, register for our Search Is the New Big Data (in-person training) on April 10.
diff --git a/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md b/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md index afe3ddd8ed..1f76c1638d 100644 --- a/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md +++ b/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md @@ -1,65 +1,65 @@ ---- -slug: sign-up-for-digitalgov-citizen-services-summit-friday-may-30 -date: 2014-05-19 3:03:16 -0400 -title: Sign up For DigitalGov Citizen Services Summit, Friday, May 30 -summary: 'We won’t build the government of the 21st century by drawing within the lines. We don’t have to tell you the hard work of building a digital government doesn’t exist in a vacuum or a bubble. Show us social media without mobile, Web without data and user experience without APIs. You can’t? That’s right—in reality,' -authors: - - rflagg - - jherman - - tammi-marcoullier - - jparcell - - apiazza - - jonathan-rubin -topics: - - api - - challenges - - content - - data - - product-management - - metrics - - mobile - - social-media - - user-experience - - DOL - - epa - - FCC - - federal-communications-commission - - GAO - - SBA - - us-department-of-labor - - us-environmental-protection-agency - - us-government-accountability-office - - us-small-business-administration ---- - -We won’t build the government of the 21st century by [drawing within the lines]({{< ref "2014-05-07-because-its-hard.md" >}}). - -We don’t have to tell you the hard work of building a digital government doesn’t exist in a vacuum or a bubble. Show us social media without mobile, Web without data and user experience without APIs. You can’t? That’s right—in reality, digital government intersects and cuts across boundaries every day in order to deliver the digital goods. - -To get ourselves thinking outside the lines in order to build an awesome 21st century government we’re bringing federal, industry and state and local employees together for the [DigitalGov Citizen Services Summit]({{< tmp"events/digitalgov-citizen-services-summit.md" >}}) on Friday, May 30. - -In our event’s panels and Expo, we’ll showcase programs that are combining, collaborating and colluding across technology boundaries to improve: - - * How agencies operate internally - * How agencies collaborate together - * How agencies engage with citizens - -Following an opening keynote by Federal Communications Commission (FCC) CIO, David Bray, our four panels will focus on: - - * performance analysis, - * customer service across channels, - * public private partnerships and - * inter-agency work. - -These panels will explore how agencies can integrate their [data]({{< ref "/topics/code" >}}data1/), [social media]({{< ref "/topics/social-media" >}}), [user experience]({{< ref "/topics/user-experience" >}}), [mobile development]({{< ref "/topics/mobile" >}}) and other programs in order to achieve the best improvements for citizen services. Confirmed speakers include: - - * Jack Bienko, Small Business Administration (SBA) - * Denise Shaw, Environmental Protection Agency (EPA) - * Sarah Kaczmarek, U.S. 
Government Accountability Office (GAO) - * Michael Pulsifer, Department of Labor (DOL) - -You can sign up for the event and see our [ever-expanding speaker list on the event page](https://www.google.com/url?q=https%3A%2F%2Fwww.digitalgov.gov%2Fevent%2Fdigitalgov-citizen-services-summit%2F&sa=D&sntz=1&usg=AFQjCNGiwao6z6PUtq_tcRPW1QVfhf-9WA). - -Our Expo will showcase innovations and shared services across the federal government. We have 30 tables available (and they’re going fast) for federal agencies to showcase projects and introduce yourselves to all the federal employees, contractors and state and local participants. Let us know your interest during the registration process … don’t think for a second that your program, large or small, isn’t the jam we are looking for - +--- +slug: sign-up-for-digitalgov-citizen-services-summit-friday-may-30 +date: 2014-05-19 3:03:16 -0400 +title: Sign up For DigitalGov Citizen Services Summit, Friday, May 30 +summary: 'We won’t build the government of the 21st century by drawing within the lines. We don’t have to tell you the hard work of building a digital government doesn’t exist in a vacuum or a bubble. Show us social media without mobile, Web without data and user experience without APIs. You can’t? That’s right—in reality,' +authors: + - rflagg + - jherman + - tammi-marcoullier + - jparcell + - apiazza + - jonathan-rubin +topics: + - api + - challenges + - content + - data + - product-management + - metrics + - mobile + - social-media + - user-experience + - DOL + - epa + - FCC + - federal-communications-commission + - GAO + - SBA + - us-department-of-labor + - us-environmental-protection-agency + - us-government-accountability-office + - us-small-business-administration +--- + +We won’t build the government of the 21st century by [drawing within the lines]({{< ref "2014-05-07-because-its-hard.md" >}}). + +We don’t have to tell you the hard work of building a digital government doesn’t exist in a vacuum or a bubble. Show us social media without mobile, Web without data and user experience without APIs. You can’t? That’s right—in reality, digital government intersects and cuts across boundaries every day in order to deliver the digital goods. + +To get ourselves thinking outside the lines in order to build an awesome 21st century government we’re bringing federal, industry and state and local employees together for the [DigitalGov Citizen Services Summit]({{< tmp"events/digitalgov-citizen-services-summit.md" >}}) on Friday, May 30. + +In our event’s panels and Expo, we’ll showcase programs that are combining, collaborating and colluding across technology boundaries to improve: + + * How agencies operate internally + * How agencies collaborate together + * How agencies engage with citizens + +Following an opening keynote by Federal Communications Commission (FCC) CIO, David Bray, our four panels will focus on: + + * performance analysis, + * customer service across channels, + * public private partnerships and + * inter-agency work. + +These panels will explore how agencies can integrate their [data]({{< ref "/topics/code" >}}data1/), [social media]({{< ref "/topics/social-media" >}}), [user experience]({{< ref "/topics/user-experience" >}}), [mobile development]({{< ref "/topics/mobile" >}}) and other programs in order to achieve the best improvements for citizen services. Confirmed speakers include: + + * Jack Bienko, Small Business Administration (SBA) + * Denise Shaw, Environmental Protection Agency (EPA) + * Sarah Kaczmarek, U.S. 
Government Accountability Office (GAO) + * Michael Pulsifer, Department of Labor (DOL) + +You can sign up for the event and see our [ever-expanding speaker list on the event page](https://www.google.com/url?q=https%3A%2F%2Fwww.digitalgov.gov%2Fevent%2Fdigitalgov-citizen-services-summit%2F&sa=D&sntz=1&usg=AFQjCNGiwao6z6PUtq_tcRPW1QVfhf-9WA). + +Our Expo will showcase innovations and shared services across the federal government. We have 30 tables available (and they’re going fast) for federal agencies to showcase projects and introduce yourselves to all the federal employees, contractors and state and local participants. Let us know your interest during the registration process … don’t think for a second that your program, large or small, isn’t the jam we are looking for + [Register Now]({{< tmp"events/digitalgov-citizen-services-summit.md" >}})! Seats are Limited! \ No newline at end of file diff --git a/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md b/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md index 45a8d5b079..8f3635c204 100644 --- a/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md +++ b/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md @@ -1,58 +1,58 @@ ---- -slug: harnessing-the-power-of-many-digitalgov-summit-panels-recap -date: 2014-06-03 15:14:11 -0400 -title: Harnessing the Power of Many—DigitalGov Summit Recap -summary: 'At the DigitalGov Citizen Services Summit last Friday, Jacob Parcell, Manager, Mobile Programs at the General Services Administration led a panel on the challenges and benefits of Inter-Agency work. The other panels were on performance analysis, customer service across channels, and public private partnerships. “The challenges are real,” said Parcell, who quoted President Obama’s famous salmon' -authors: - - alison-lemon -topics: - - challenges - - code - - content - - product-management - - metrics - - mobile - - social-media - - user-experience - - Census - - DOL - - epa - - fda - - NCI - - recaps - - us-department-of-labor - - us-environmental-protection-agency - - us-food-and-drug-administration - - united-states-census-bureau ---- - -{{< legacy-img src="2014/06/600-x-370-Jacob-Parcell-Panel-4-Inter-Agency-Work-toni470-flickr-20140530_114324.jpg" alt="Jacob Parcell, GSA - Panel 4: Inter-Agency Work - Alec Permison, Census; Lakshmi Grama, NCI; Denice Shaw, EPA; Mike Pulsifer, DOL" caption="" >}} - -At the [DigitalGov Citizen Services Summit]({{< ref "2014-05-30-digitalgov-citizen-services-summit-a-success.md" >}}) last Friday, Jacob Parcell, Manager, Mobile Programs at the General Services Administration led a panel on the challenges and benefits of Inter-Agency work. The other panels were on [performance analysis]({{< ref "2014-06-03-digitalgov-summit-panels-recap.md" >}}), [customer service across channels]({{< ref "2014-06-05-the-importance-of-cross-channel-customer-service-digitalgov-summit-recap.md" >}} "The Importance of Cross-Channel Customer Service—DigitalGov Summit Recap"), and [public private partnerships]({{< ref "2014-06-05-overcoming-barriers-digitalgov-summit-recap.md" >}} "Overcoming Barriers—DigitalGov Summit Recap"). 
- -“The challenges are real,” said Parcell, who quoted President Obama’s famous salmon quandary: “The Interior Department is in charge of salmon while they’re in fresh water, but the Commerce Department handles them when they’re in saltwater,” Obama said. “I hear it gets even more complicated once they’re smoked.” - -However, the benefits of Inter-Agency work can be enormous. The panel also tackled when to use a top-down versus a bottom-up approach and suggestions for improving inter-agency work. - -**Lakshmi Grama**, Senior Digital Content Strategist at the **National Cancer Institute**, knows this from her recent spearheading of a working group to create a content modeling solution. There was popular content all over government sites, from events to press releases. She knew it was not something that just one group could solve, so she harnessed the power of a 10 agency working group to produce two content models in just six months. “When people find value, they’ll work together.” - -**Alec Permison**, Applications Manager, Census.gov, at the **U.S. Census Bureau**, works on apps that pull data from multiple agencies about things like employment and economic indicators. A major challenge is that agency data can be in a many different formats. “We want an app that’s slick, but that still ensures the quality of the data.” The end product allows citizens to access information directly, “without waiting to hear it in the news.” - -One of the agencies that is used to supplying information is the **Department of Labor** (DOL). **Mike Pulsifer**, Lead IT Specialist at DOL said his agency has over 300 data sets that are public and available to developers. The have a commitment to using open source. - -**Denice Shaw**, Associate Chief Innovation Officer, Office of Research and Development, **Environmental Protection Agency**, knew the problem of nutrient pollution needed an unconventional solution. Many agencies needed the data, and although sensors existed to measure the problem, it was expensive to do so. However, by pulling together multiple stakeholders, including other agencies and academia they were able to lower the cost. - -### Top-down approach vs. a bottom-up approach - -According to Parcell, both approaches can work. When you use a bottom-up approach, “if you can find the things people are interested, you can get more people involved.” - -Grama says “the top-down approach is familiar. The top might not know about the details, they are more interested in the end product.” She also sees a lot of Web and social media folks using a bottom-up approach to figuring things out. The challenge is to articulate why it’s important to top management. - -### Improving inter-agency work - -What can we do improve the quality of inter-agency work? Grama thinks it would be beneficial for government workers to carve out time specifically to think about innovation. - -Agencies can also strive to think beyond their silos, since ultimately we work for the taxpayer. If you do work for another agency, “the taxpayer benefits even if your own agency doesn’t see the direct benefit,” said Pulsifer. 
- -What has been your experience with inter-agency work?_**Alison Lemon** is a [Knowledge Manager for the SocialGov Community](FIND?s=alison+lemon.md) and a Senior Analyst for Social Media with the **FDA’s Office of Women’s Health**._ +--- +slug: harnessing-the-power-of-many-digitalgov-summit-panels-recap +date: 2014-06-03 15:14:11 -0400 +title: Harnessing the Power of Many—DigitalGov Summit Recap +summary: 'At the DigitalGov Citizen Services Summit last Friday, Jacob Parcell, Manager, Mobile Programs at the General Services Administration led a panel on the challenges and benefits of Inter-Agency work. The other panels were on performance analysis, customer service across channels, and public private partnerships. “The challenges are real,” said Parcell, who quoted President Obama’s famous salmon' +authors: + - alison-lemon +topics: + - challenges + - code + - content + - product-management + - metrics + - mobile + - social-media + - user-experience + - Census + - DOL + - epa + - fda + - NCI + - recaps + - us-department-of-labor + - us-environmental-protection-agency + - us-food-and-drug-administration + - united-states-census-bureau +--- + +{{< legacy-img src="2014/06/600-x-370-Jacob-Parcell-Panel-4-Inter-Agency-Work-toni470-flickr-20140530_114324.jpg" alt="Jacob Parcell, GSA - Panel 4: Inter-Agency Work - Alec Permison, Census; Lakshmi Grama, NCI; Denice Shaw, EPA; Mike Pulsifer, DOL" caption="" >}} + +At the [DigitalGov Citizen Services Summit]({{< ref "2014-05-30-digitalgov-citizen-services-summit-a-success.md" >}}) last Friday, Jacob Parcell, Manager, Mobile Programs at the General Services Administration led a panel on the challenges and benefits of Inter-Agency work. The other panels were on [performance analysis]({{< ref "2014-06-03-digitalgov-summit-panels-recap.md" >}}), [customer service across channels]({{< ref "2014-06-05-the-importance-of-cross-channel-customer-service-digitalgov-summit-recap.md" >}} "The Importance of Cross-Channel Customer Service—DigitalGov Summit Recap"), and [public private partnerships]({{< ref "2014-06-05-overcoming-barriers-digitalgov-summit-recap.md" >}} "Overcoming Barriers—DigitalGov Summit Recap"). + +“The challenges are real,” said Parcell, who quoted President Obama’s famous salmon quandary: “The Interior Department is in charge of salmon while they’re in fresh water, but the Commerce Department handles them when they’re in saltwater,” Obama said. “I hear it gets even more complicated once they’re smoked.” + +However, the benefits of Inter-Agency work can be enormous. The panel also tackled when to use a top-down versus a bottom-up approach and suggestions for improving inter-agency work. + +**Lakshmi Grama**, Senior Digital Content Strategist at the **National Cancer Institute**, knows this from her recent spearheading of a working group to create a content modeling solution. There was popular content all over government sites, from events to press releases. She knew it was not something that just one group could solve, so she harnessed the power of a 10 agency working group to produce two content models in just six months. “When people find value, they’ll work together.” + +**Alec Permison**, Applications Manager, Census.gov, at the **U.S. Census Bureau**, works on apps that pull data from multiple agencies about things like employment and economic indicators. A major challenge is that agency data can be in a many different formats. 
“We want an app that’s slick, but that still ensures the quality of the data.” The end product allows citizens to access information directly, “without waiting to hear it in the news.” + +One of the agencies that is used to supplying information is the **Department of Labor** (DOL). **Mike Pulsifer**, Lead IT Specialist at DOL said his agency has over 300 data sets that are public and available to developers. The have a commitment to using open source. + +**Denice Shaw**, Associate Chief Innovation Officer, Office of Research and Development, **Environmental Protection Agency**, knew the problem of nutrient pollution needed an unconventional solution. Many agencies needed the data, and although sensors existed to measure the problem, it was expensive to do so. However, by pulling together multiple stakeholders, including other agencies and academia they were able to lower the cost. + +### Top-down approach vs. a bottom-up approach + +According to Parcell, both approaches can work. When you use a bottom-up approach, “if you can find the things people are interested, you can get more people involved.” + +Grama says “the top-down approach is familiar. The top might not know about the details, they are more interested in the end product.” She also sees a lot of Web and social media folks using a bottom-up approach to figuring things out. The challenge is to articulate why it’s important to top management. + +### Improving inter-agency work + +What can we do improve the quality of inter-agency work? Grama thinks it would be beneficial for government workers to carve out time specifically to think about innovation. + +Agencies can also strive to think beyond their silos, since ultimately we work for the taxpayer. If you do work for another agency, “the taxpayer benefits even if your own agency doesn’t see the direct benefit,” said Pulsifer. + +What has been your experience with inter-agency work?_**Alison Lemon** is a [Knowledge Manager for the SocialGov Community](FIND?s=alison+lemon.md) and a Senior Analyst for Social Media with the **FDA’s Office of Women’s Health**._ _Thanks to our special Summit blogger, Alison, who took up the Open Opportunities challenge. You can [find more opportunities to participate](http://gsablogs.gsa.gov/dsic/category/open-opportunities/)._ \ No newline at end of file diff --git a/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md b/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md index 61ab82a2dd..c0ca954432 100644 --- a/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md +++ b/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md @@ -1,290 +1,290 @@ ---- -slug: a-picture-is-worth-a-thousand-tokens-part-ii -date: 2014-11-04 10:00:48 -0400 -title: 'A Picture Is Worth a Thousand Tokens: Part II' -summary: 'In the first part of A Picture Is Worth a Thousand Tokens, I explained why we built a social media-driven image search engine, and specifically how we used Elasticsearch to build its first iteration. 
In this week’s post, I’ll take a deep dive into how we worked to improve relevancy, recall, and the searcher’s experience' -authors: - - loren-siebert -topics: - - content - - our-work - - social-media - - instagram - - open-government - - usagov ---- - -In the first part of [_A Picture Is Worth a Thousand Tokens_]({{< ref "2014-10-28-a-picture-is-worth-a-thousand-tokens.md" >}} "A Picture Is Worth a Thousand Tokens"), I explained why we built a social media-driven image search engine, and specifically how we used Elasticsearch to build its first iteration. In this week’s post, I’ll take a deep dive into how we worked to improve relevancy, recall, and the searcher’s experience as a whole. - -## Redefine Recency - -To solve the scoring problem on older photos for archival photostreams, we decided that after some amount of time, say six weeks, we no longer wanted to keep decaying the relevancy on photos. To put that into effect, we modified the functions in the function score like this: - -[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-Redefine-Recency-code.jpg" alt="600-x-186-tokens-Part-2-Redefine-Recency-code" >}}](https://gist.github.com/loren/df85de9536216ae32b19) - -Now we only apply the Gaussian decay for photos taken in the last six weeks or so. Anything older than that gets a constant decay or negative boost equal to what it would be if the photo were about six weeks old. So rather than having the decay factor continue on down to zero, we stop it at around 0.12. For all those Civil War photos in the Library of Congress’ photostream, the date ends up being factored out of the relevancy equation and they are judged solely on their similarity score and their popularity. - -## Recognize Proximity - -To rank “County event in Jefferson Memorial” higher than “Memorial event in Jefferson County” on a search for _jefferson memorial_, the simplest way to handle it was to use a [match_phrase query](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-match-query.html#_phrase) to make the proximity of the terms a nice-to-have signal that could be factored into the overall score. The updated boolean clause matches on the phrase like this: - -[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-Recognize-Proximity-code.jpg" alt="600-x-186-tokens-Part-2-Recognize-Proximity-code" >}}](https://gist.github.com/loren/7741c52bd8e74d7ef626) - -## Account for Misspellings - -We already knew from prior projects that we’d get a lot of misspelled search terms, but we put off implementing spelling suggestions and overrides until we’d rolled out our minimum viable product in our first iteration. - -Misspelled search terms can be handled in different ways depending on your corpus and your tolerance for false positives. This shows one way of thinking about it: - -A visitor searches for _jeferson memorial_ (sic). - -Perform search with misspelled term. - -Are there any results at all for the misspelled _jeferson memorial_? - -> Show them. - -> Can we suggest a similar query that yields **more** results from our indexes (such as _jefferson memorial_)? - -> Surface suggestion above results: “Did you mean _jefferson memorial_?” - -Can we find a similar query that would yield **any** results? - -> Perform search with that new overridden corrected term. 
- -> Surface override above results: “We’re showing results for _jefferson memorial_.” - -The problem with suggesting a “better” search term than what the visitor typed is that it’s easy to get false positives that vary from hilarious to embarrassing: - - * You searched on _president obama_. Did you mean _obama precedent_? - * You searched on _correspondents dinner_. Did you mean _correspondence dinner_? - * You searched on _civil rights_. Did you mean _civil right_? - * You searched on _better america_. Did you mean _bitter america_? - -OK, that last one didn’t really happen, but it could have, so we put that particular problem on the back shelf and instead focused on handling cases where the visitor’s search as typed didn’t return any results from our indexes but a slight variation on the query did. To do this, we introduced a new field to the indexes called “bigram” based on a [shingle token filter](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-shingle-tokenfilter.html#analysis-shingle-tokenfilter) we called “bigram_filter.” - -The Elasticsearch settings got modified like this: - -{ - "filter": { - "bigram_filter": { - "type": "shingle" - }, - …. - } -}- -The properties in the Flickr and Instagram index mappings got modified as well. - -Flickr: - -[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-flickr-code.jpg" alt="600-x-186-tokens-Part-2-flickr-code" >}}](https://gist.github.com/loren/f08c3e2c97e7773e432e) - -Instagram: - -[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-instagram-code.jpg" alt="600-x-186-tokens-Part-2-instagram-code" >}}](https://gist.github.com/loren/89a80170b14714f074c2) - -This populates the bigram field for each index with whatever natural language fields it might have. For Instagram, it’s just the caption field, but Flickr has title and description so these are essentially appended together as they are copied into the bigram field. In both cases, they are analyzed with the shingle filter which creates bigrams out of the text. The clause of the query that generates the suggestion looks like this: - -
-{
-  "suggest": {
-    "text": "jeferson memorial",
-    "suggestion": {
-      "phrase": {
-        "analyzer": "bigram_analyzer",
-        "field": "bigram",
-        "size": 1,
-        "direct_generator": [
-          {
-            "field": "bigram",
-            "prefix_len": 1
-          }
-        ],
-        "highlight": {
-          "pre_tag": "<strong>",
-          "post_tag": "<\/strong>"
-        }
-      }
-    }
-  }
-}
-
- We only care about the top suggestion, and we’re willing to take the small performance penalty of using just the first letter of the search term as the starting point for the suggestion rather than the default two-character prefix.
-
- Here’s an example of how bigrams really help generate relevant multi-word suggestions.
-
- An image search on USA.gov for _correspondence_ generates lots of results. Misspell it and search on _correspondense_ and it works as you might expect, showing results for _correspondence_.
-
- But now when you search on _correspondense dinner_, you get results for _correspondents dinner_. It correctly recommends _correspondents dinner_ even though _correspondence_ has a higher term frequency than _correspondents_ does.
-
- Bigrams (word pairs) let us generate phrase suggestions rather than term suggestions by giving the suggester some collocation information. This increases the likelihood of a good suggestion for a multi-word search query when there are multiple possibilities for each individual word in the query.
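A quick way to see that collocation signal is to run the analyzer itself. This is an illustrative sketch rather than something from the original post: it assumes the development-asis-flickr_photos index and the bigram_analyzer referenced above, and it uses the JSON-body form of the _analyze API (the 1.x releases current at the time passed the analyzer and text as URL parameters instead).

    GET /development-asis-flickr_photos/_analyze
    {
      "analyzer": "bigram_analyzer",
      "text": "correspondents dinner"
    }

With the shingle filter defaults (unigrams plus two-word shingles), the response should list the tokens correspondents, correspondents dinner, and dinner; that middle bigram is what lets the phrase suggester prefer "correspondents dinner" over "correspondence dinner".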
- - -- Most of the near-duplicate photo problems came from Flickr profiles. Flickr has the notion of an album, so we thought we could take advantage of this and save ourselves a lot of work building a classifier. Even if retrieving a photo’s albums (they can belong to many) from the Flickr API had been straightforward, it would still not have helped as some albums contain thousands of very different photos. Some of the Library of Congress albums on Flickr have over 10,000 photos, all with very different titles and descriptions. -
- - -- As we were already using Elasticsearch to do everything else, we wondered if it could also help us group photos into albums and then return just the most relevant photo from each album in the search results. The answer turned out to be “yes” on both fronts by using the more_like_this query as a starting point for classification and the top_hits aggregation to pluck the best photos from each album. -
- First we added an unanalyzed “album” field to the mappings on each index:
-
-{
-  "album": {
-    "type": "string",
-    "index": "not_analyzed"
-  }
-}
-
- Then we established some criteria to describe when two photos should be considered part of the same album:
-
- For a given Flickr photo with ID #12345, this query finds other Flickr photos from the same Flickr user profile “flickr_user_1@n02” also taken on April 23rd, 2012 that could potentially be grouped into the same album:
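The query itself isn’t reproduced here, so the block below is a rough sketch of the shape being described rather than the project’s actual query: the owner and taken_date field names, the like_text placeholders, and the percent_terms_to_match values are illustrative stand-ins (the thresholds were tuned by trial and error, as noted just below), while the min_term_freq and max_query_terms settings and the filtered/more_like_this structure follow the Elasticsearch 1.x syntax described in the text.

    {
      "query": {
        "filtered": {
          "query": {
            "bool": {
              "should": [
                {
                  "more_like_this": {
                    "fields": ["title"],
                    "like_text": "title text of photo 12345",
                    "min_term_freq": 1,
                    "max_query_terms": 500,
                    "percent_terms_to_match": 0.5
                  }
                },
                {
                  "more_like_this": {
                    "fields": ["description"],
                    "like_text": "description text of photo 12345",
                    "min_term_freq": 1,
                    "max_query_terms": 500,
                    "percent_terms_to_match": 0.3
                  }
                }
              ]
            }
          },
          "filter": {
            "bool": {
              "must": [
                { "term": { "owner": "flickr_user_1@n02" } },
                { "term": { "taken_date": "2012-04-23" } }
              ]
            }
          }
        }
      }
    }

The term filters pin candidates to the same profile and the same day, and the MLT should clauses score how similar each candidate’s title and description are to photo #12345’s.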
- The filter part of this query is straightforward, as it’s just enforcing two of the criteria we established for classifying photos. The more_like_this (MLT) part is actually broken down into multiple pieces, each with its own parameters, and wrapped up in a boolean clause. For all of the MLT queries, we set the minimum term frequency to 1 as a given term may only show up once in any particular field. The max_query_terms parameter is raised up really high to 500 terms, as sometimes a field can have that many terms in it and we want to take them all into account. From there, we just used some trial and error to see what percent_terms_to_match threshold to use for each field.
- - -- The aggregation on the raw document scores came about after looking at the distribution of relevancy scores from the MLT query. Often, some group of, say, 100 photos would be pretty similar to a given photo, but the distribution of scores would be clumped around a few scores. Perhaps 60 photos would have an identical score of 4.5 and another 20 would have the same score of 4.4, and next group down would have a few clumped much lower at 0.6 and then the remainder would have different but all very low scores. The photos that ended up with the same scores to each other tended to have identical metadata. Usually the first two buckets from the aggregations would have very similar scores, so we assigned all of those photos to the same Elasticsearch album. -
- - -- Now that we had some notion of an album, we needed to pick the most relevant photo from each album and then sort all of those top picks by their relevancy scores to generate the actual search results. And don’t forget, we could be searching across hundreds of thousands of albums spanning hundreds of Flickr and Instagram profiles, and we still need to take each photo’s dynamic recency and popularity into account and then blend the results from both Flickr and Instagram indexes. And ideally, all this should happen within a few dozen milliseconds. It seems like an awfully tall order but the top_hits query made it pretty simple. The filtered query part of our request remained the same. We just added a nested aggregation to bucket by album and then pick the top hit from each album: -
- - -{ - "aggs": { - "album_agg": { - "terms": { - "field": "album", - "order": { - "top_score": "desc" - } - }, - "aggs": { - "top_image_hits": { - "top_hits": { - "size": 1 - } - }, - "top_score": { - "max": { - "script": "_doc.score" - } - } - } - } - } -} -- - -
- We changed the type of query to the more efficient search_count, as we no longer needed “hits”. We are only looking at the aggregation buckets now. -
- - --- - -- GET http://localhost:9200/development-asis-flickr_photos,development-asis-instagram_photos/_search?search_type=count&size=0 -
-
- Like any fuzzy matching solution, this album classification strategy is practically guaranteed to both under-classify photos that should be in the same album as well as over-classify photos that should be kept separate. But we were pretty confident that the search experience had improved, and were impressed with how easy Elasticsearch made it to pull a solution together. -
- - -- One downside is that the aggregation query is more CPU and memory intensive than the more typical “hits” query we had before, but we still get results in well under 100ms and we haven’t done anything to optimize it yet. The other problem we created with these aggregated results centered around pagination. If you request 10 results from the API, the 10 photos you get may each come from a different album, and each album may have thousands of photos. So the 10th photo might actually have been the 10,000th “hit”. And while it’s easy for Elasticsearch to tell you how many total hits were found, currently there’s no cheap way of knowing how many potential buckets you’ll have in an aggregation unless you go and compute them all, and that can lead to both memory problems and wasted CPU. -
- - -- Although Elasticsearch defaults to five shards per index, we put each image index in just one shard. As we are relying so heavily on relevance across potentially small populations of photos, we wanted the results to be as accurate as possible (see Elasticsearch’s Relevance Is Broken!). -
- - -- With just a million photos in our initial index this is not a problem, but a billion photos will require the sort of horizontal scaling that Elasticsearch is known for. Changing the number of shards will require a full reindex. We also update our synonyms from time to time, and that requires reindexing, too. To accommodate this without any downtime, we use index aliases. We spin up a new index in the background, populate it with stream2es, and just adjust the alias on the running system in real-time. As the number of shards grows, we can experiment with routing the indexing and the queries to hit the same shards. -
- - -- Many Elasticsearch articles involve closed proprietary systems that cannot be fully shared with the rest of the world. With ASIS, we’ve taken a different approach and published the entire codebase along with this explanation of how we went about building it and the decisions (good and bad) we made along the way. This stemmed from our commitment to transparency and open government, and we’d also like others to be able to fork the ASIS codebase and either help improve it or perhaps just use it to build their own image search engine. +--- +slug: a-picture-is-worth-a-thousand-tokens-part-ii +date: 2014-11-04 10:00:48 -0400 +title: 'A Picture Is Worth a Thousand Tokens: Part II' +summary: 'In the first part of A Picture Is Worth a Thousand Tokens, I explained why we built a social media-driven image search engine, and specifically how we used Elasticsearch to build its first iteration. In this week’s post, I’ll take a deep dive into how we worked to improve relevancy, recall, and the searcher’s experience' +authors: + - loren-siebert +topics: + - content + - our-work + - social-media + - instagram + - open-government + - usagov +--- + +In the first part of [_A Picture Is Worth a Thousand Tokens_]({{< ref "2014-10-28-a-picture-is-worth-a-thousand-tokens.md" >}} "A Picture Is Worth a Thousand Tokens"), I explained why we built a social media-driven image search engine, and specifically how we used Elasticsearch to build its first iteration. In this week’s post, I’ll take a deep dive into how we worked to improve relevancy, recall, and the searcher’s experience as a whole. + +## Redefine Recency + +To solve the scoring problem on older photos for archival photostreams, we decided that after some amount of time, say six weeks, we no longer wanted to keep decaying the relevancy on photos. To put that into effect, we modified the functions in the function score like this: + +[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-Redefine-Recency-code.jpg" alt="600-x-186-tokens-Part-2-Redefine-Recency-code" >}}](https://gist.github.com/loren/df85de9536216ae32b19) + +Now we only apply the Gaussian decay for photos taken in the last six weeks or so. Anything older than that gets a constant decay or negative boost equal to what it would be if the photo were about six weeks old. So rather than having the decay factor continue on down to zero, we stop it at around 0.12. For all those Civil War photos in the Library of Congress’ photostream, the date ends up being factored out of the relevancy equation and they are judged solely on their similarity score and their popularity. + +## Recognize Proximity + +To rank “County event in Jefferson Memorial” higher than “Memorial event in Jefferson County” on a search for _jefferson memorial_, the simplest way to handle it was to use a [match_phrase query](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-match-query.html#_phrase) to make the proximity of the terms a nice-to-have signal that could be factored into the overall score. 
The updated boolean clause matches on the phrase like this: + +[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-Recognize-Proximity-code.jpg" alt="600-x-186-tokens-Part-2-Recognize-Proximity-code" >}}](https://gist.github.com/loren/7741c52bd8e74d7ef626) + +## Account for Misspellings + +We already knew from prior projects that we’d get a lot of misspelled search terms, but we put off implementing spelling suggestions and overrides until we’d rolled out our minimum viable product in our first iteration. + +Misspelled search terms can be handled in different ways depending on your corpus and your tolerance for false positives. This shows one way of thinking about it: + +A visitor searches for _jeferson memorial_ (sic). + +Perform search with misspelled term. + +Are there any results at all for the misspelled _jeferson memorial_? + +> Show them. + +> Can we suggest a similar query that yields **more** results from our indexes (such as _jefferson memorial_)? + +> Surface suggestion above results: “Did you mean _jefferson memorial_?” + +Can we find a similar query that would yield **any** results? + +> Perform search with that new overridden corrected term. + +> Surface override above results: “We’re showing results for _jefferson memorial_.” + +The problem with suggesting a “better” search term than what the visitor typed is that it’s easy to get false positives that vary from hilarious to embarrassing: + + * You searched on _president obama_. Did you mean _obama precedent_? + * You searched on _correspondents dinner_. Did you mean _correspondence dinner_? + * You searched on _civil rights_. Did you mean _civil right_? + * You searched on _better america_. Did you mean _bitter america_? + +OK, that last one didn’t really happen, but it could have, so we put that particular problem on the back shelf and instead focused on handling cases where the visitor’s search as typed didn’t return any results from our indexes but a slight variation on the query did. To do this, we introduced a new field to the indexes called “bigram” based on a [shingle token filter](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-shingle-tokenfilter.html#analysis-shingle-tokenfilter) we called “bigram_filter.” + +The Elasticsearch settings got modified like this: + +
{ + "filter": { + "bigram_filter": { + "type": "shingle" + }, + …. + } +}+ +The properties in the Flickr and Instagram index mappings got modified as well. + +Flickr: + +[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-flickr-code.jpg" alt="600-x-186-tokens-Part-2-flickr-code" >}}](https://gist.github.com/loren/f08c3e2c97e7773e432e) + +Instagram: + +[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-instagram-code.jpg" alt="600-x-186-tokens-Part-2-instagram-code" >}}](https://gist.github.com/loren/89a80170b14714f074c2) + +This populates the bigram field for each index with whatever natural language fields it might have. For Instagram, it’s just the caption field, but Flickr has title and description so these are essentially appended together as they are copied into the bigram field. In both cases, they are analyzed with the shingle filter which creates bigrams out of the text. The clause of the query that generates the suggestion looks like this: + +
{ + "suggest": { + "text": "jeferson memorial", + "suggestion": { + "phrase": { + "analyzer": "bigram_analyzer", + "field": "bigram", + "size": 1, + "direct_generator": [ + { + "field": "bigram", + "prefix_len": 1 + } + ], + "highlight": { + "pre_tag": "", + "post_tag": "<\/strong>" + } + } + } + } +}+ + +
+ We only care about the top suggestion, and we’re willing to take the small performance penalty of using just the first letter of the search term as the starting point for the suggestion rather than the default two-character prefix. +
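+
+The suggestion clause above refers to a “bigram_analyzer” that isn’t spelled out in the settings snippet. A minimal sketch of how such an analyzer could be wired to the shingle filter is below; the tokenizer and filter chain shown are assumptions for illustration, not the project’s exact settings:
+
+<pre>
+{
+  "settings": {
+    "analysis": {
+      "filter": {
+        "bigram_filter": {
+          "type": "shingle"
+        }
+      },
+      "analyzer": {
+        "bigram_analyzer": {
+          "type": "custom",
+          "tokenizer": "standard",
+          "filter": ["lowercase", "bigram_filter"]
+        }
+      }
+    }
+  }
+}
+</pre>
+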
+ + ++ Here’s an example of how bigrams really help generate relevant multi-word suggestions. +
+ + ++ An image search on USA.gov for correspondence generates lots of results. Misspell it and search on correspondense and it works as you might expect, showing results for correspondence. +
+ + ++ But now when you search on correspondense dinner, you get results for correspondents dinner. It correctly recommends correspondents dinner even though correspondence has a higher term frequency than correspondents does. +
+ + ++ Bigrams (word pairs) let us generate phrase suggestions rather than term suggestions by giving the suggester some collocation information. This increases the likelihood of a good suggestion for a multi-word search query when there are multiple possibilities for each individual word in the query. +
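+
+For the example above, the suggest portion of the response comes back in roughly this shape; the score and highlighting are illustrative values, not captured output:
+
+<pre>
+{
+  "suggest": {
+    "suggestion": [
+      {
+        "text": "correspondense dinner",
+        "offset": 0,
+        "length": 21,
+        "options": [
+          {
+            "text": "correspondents dinner",
+            "highlighted": "<strong>correspondents</strong> dinner",
+            "score": 0.0021
+          }
+        ]
+      }
+    ]
+  }
+}
+</pre>
+
+Since only the top suggestion matters here, the application can simply read the first option, if there is one.
+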
+ + ++ Most of the near-duplicate photo problems came from Flickr profiles. Flickr has the notion of an album, so we thought we could take advantage of this and save ourselves a lot of work building a classifier. Even if retrieving a photo’s albums (they can belong to many) from the Flickr API had been straightforward, it would still not have helped as some albums contain thousands of very different photos. Some of the Library of Congress albums on Flickr have over 10,000 photos, all with very different titles and descriptions. +
+ + ++ As we were already using Elasticsearch to do everything else, we wondered if it could also help us group photos into albums and then return just the most relevant photo from each album in the search results. The answer turned out to be “yes” on both fronts by using the more_like_this query as a starting point for classification and the top_hits aggregation to pluck the best photos from each album. +
+ + ++ First we added an unanalyzed “album” field to the mappings on each index: +
+ + +{ + "album": { + "type": "string", + "index": "not_analyzed" + } +}+ + +
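+
+If the index already exists, a field like that can be added with the put-mapping API. This is a sketch only; the type name “flickr_photo” is an assumption for illustration:
+
+<pre>
+# Type name "flickr_photo" is illustrative, not necessarily the project's mapping type.
+curl -XPUT 'http://localhost:9200/development-asis-flickr_photos/_mapping/flickr_photo' -d '{
+  "flickr_photo": {
+    "properties": {
+      "album": {
+        "type": "string",
+        "index": "not_analyzed"
+      }
+    }
+  }
+}'
+</pre>
+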
+ Then we established some criteria to describe when two photos should be considered part of the same album: they belong to the same profile, they were taken on the same day, and their titles and descriptions look alike.
+ + ++ For a given Flickr photo with ID #12345, this query finds other Flickr photos from the same Flickr user profile “flickr_user_1@n02” also taken on April 23rd, 2012 that could potentially be grouped into the same album: +
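+
+A sketch of what such a query can look like is below. The field names (owner, taken_at), the like_text placeholders, and the percent_terms_to_match values are assumptions that mirror the description that follows, not the project’s exact query:
+
+<pre>
+{
+  "query": {
+    "filtered": {
+      "query": {
+        "bool": {
+          "should": [
+            {
+              "more_like_this": {
+                "fields": ["title"],
+                "like_text": "title of photo 12345 (placeholder)",
+                "min_term_freq": 1,
+                "max_query_terms": 500,
+                "percent_terms_to_match": 0.5
+              }
+            },
+            {
+              "more_like_this": {
+                "fields": ["description"],
+                "like_text": "description of photo 12345 (placeholder)",
+                "min_term_freq": 1,
+                "max_query_terms": 500,
+                "percent_terms_to_match": 0.3
+              }
+            }
+          ]
+        }
+      },
+      "filter": {
+        "and": [
+          { "term": { "owner": "flickr_user_1@n02" } },
+          { "term": { "taken_at": "2012-04-23" } }
+        ]
+      }
+    }
+  }
+}
+</pre>
+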
+ + + + + ++ The filter part of this query is straightforward, as it’s just enforcing two of the criteria we established for classifying photos. The more_like_this (MLT) part is actually broken down into multiple pieces, each with its own parameters, and wrapped up in a boolean clause. For all of the MLT queries, we set the minimum term frequency to 1 as a given term may only show up once in any particular field. The max_query_terms parameter is raised up really high to 500 terms, as sometimes a field can have that many terms in it and we want to take them all into account. From there, we just used some trial and error to see what percent_terms_to_match threshold to use for each field. +
+ +<p> + The aggregation on the raw document scores came about after looking at the distribution of relevancy scores from the MLT query. Often, some group of, say, 100 photos would be pretty similar to a given photo, but the distribution of scores would be clumped around just a few values. Perhaps 60 photos would have an identical score of 4.5, another 20 would share a score of 4.4, the next group down would have a few clumped much lower at 0.6, and the remainder would have distinct but uniformly very low scores. The photos that ended up with identical scores tended to have identical metadata. Usually the first two buckets from the aggregations would have very similar scores, so we assigned all of those photos to the same Elasticsearch album.
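+
+One way to eyeball that clumping is to run the MLT query with a histogram aggregation over the document score, reusing the same scripted score access that orders the album buckets below. The interval is an illustrative choice, not a tuned value:
+
+<pre>
+{
+  "aggs": {
+    "score_distribution": {
+      "histogram": {
+        "script": "_doc.score",
+        "interval": 0.1
+      }
+    }
+  }
+}
+</pre>
+
+Buckets that pile up at the same few values correspond to the groups of photos with near-identical metadata described above.
+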
+ + ++ Now that we had some notion of an album, we needed to pick the most relevant photo from each album and then sort all of those top picks by their relevancy scores to generate the actual search results. And don’t forget, we could be searching across hundreds of thousands of albums spanning hundreds of Flickr and Instagram profiles, and we still need to take each photo’s dynamic recency and popularity into account and then blend the results from both Flickr and Instagram indexes. And ideally, all this should happen within a few dozen milliseconds. It seems like an awfully tall order but the top_hits query made it pretty simple. The filtered query part of our request remained the same. We just added a nested aggregation to bucket by album and then pick the top hit from each album: +
+ + +{ + "aggs": { + "album_agg": { + "terms": { + "field": "album", + "order": { + "top_score": "desc" + } + }, + "aggs": { + "top_image_hits": { + "top_hits": { + "size": 1 + } + }, + "top_score": { + "max": { + "script": "_doc.score" + } + } + } + } + } +} ++ + +
+ We changed the search type to the more efficient count type (search_type=count), as we no longer needed “hits”. We are only looking at the aggregation buckets now.
+ + +++ + ++ GET http://localhost:9200/development-asis-flickr_photos,development-asis-instagram_photos/_search?search_type=count&size=0 +
+
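+
+Putting those pieces together, a request against both indexes might look like the sketch below; the match on _all is a stand-in for the real filtered and function score query from Part I:
+
+<pre>
+# Sketch only: the query clause is a placeholder for the full query described in Part I.
+curl -XGET 'http://localhost:9200/development-asis-flickr_photos,development-asis-instagram_photos/_search?search_type=count' -d '{
+  "query": { "match": { "_all": "jefferson memorial" } },
+  "aggs": {
+    "album_agg": {
+      "terms": {
+        "field": "album",
+        "order": { "top_score": "desc" }
+      },
+      "aggs": {
+        "top_image_hits": { "top_hits": { "size": 1 } },
+        "top_score": { "max": { "script": "_doc.score" } }
+      }
+    }
+  }
+}'
+</pre>
+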
+ Like any fuzzy matching solution, this album classification strategy is practically guaranteed both to under-classify photos that should be in the same album and to over-classify photos that should be kept separate. But we were pretty confident that the search experience had improved, and were impressed with how easy Elasticsearch made it to pull a solution together.
+ + ++ One downside is that the aggregation query is more CPU and memory intensive than the more typical “hits” query we had before, but we still get results in well under 100ms and we haven’t done anything to optimize it yet. The other problem we created with these aggregated results centered around pagination. If you request 10 results from the API, the 10 photos you get may each come from a different album, and each album may have thousands of photos. So the 10th photo might actually have been the 10,000th “hit”. And while it’s easy for Elasticsearch to tell you how many total hits were found, currently there’s no cheap way of knowing how many potential buckets you’ll have in an aggregation unless you go and compute them all, and that can lead to both memory problems and wasted CPU. +
+ + ++ Although Elasticsearch defaults to five shards per index, we put each image index in just one shard. As we are relying so heavily on relevance across potentially small populations of photos, we wanted the results to be as accurate as possible (see Elasticsearch’s Relevance Is Broken!). +
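+
+Pinning an index to a single shard is just a setting at index-creation time; a minimal sketch, with the index name and replica count as illustrative values:
+
+<pre>
+# Index name and replica count are illustrative.
+curl -XPUT 'http://localhost:9200/development-asis-flickr_photos-v1' -d '{
+  "settings": {
+    "number_of_shards": 1,
+    "number_of_replicas": 1
+  }
+}'
+</pre>
+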
+ + ++ With just a million photos in our initial index this is not a problem, but a billion photos will require the sort of horizontal scaling that Elasticsearch is known for. Changing the number of shards will require a full reindex. We also update our synonyms from time to time, and that requires reindexing, too. To accommodate this without any downtime, we use index aliases. We spin up a new index in the background, populate it with stream2es, and just adjust the alias on the running system in real-time. As the number of shards grows, we can experiment with routing the indexing and the queries to hit the same shards. +
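+
+The zero-downtime swap itself is a single call to the aliases API once the new index is populated, and both actions are applied atomically. The version suffixes here are illustrative:
+
+<pre>
+# Index names with -v1/-v2 suffixes are illustrative; the alias stays stable for queries.
+curl -XPOST 'http://localhost:9200/_aliases' -d '{
+  "actions": [
+    { "remove": { "index": "development-asis-flickr_photos-v1", "alias": "development-asis-flickr_photos" } },
+    { "add": { "index": "development-asis-flickr_photos-v2", "alias": "development-asis-flickr_photos" } }
+  ]
+}'
+</pre>
+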
+ + ++ Many Elasticsearch articles involve closed proprietary systems that cannot be fully shared with the rest of the world. With ASIS, we’ve taken a different approach and published the entire codebase along with this explanation of how we went about building it and the decisions (good and bad) we made along the way. This stemmed from our commitment to transparency and open government, and we’d also like others to be able to fork the ASIS codebase and either help improve it or perhaps just use it to build their own image search engine.
\ No newline at end of file diff --git a/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md b/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md index 4704f163ca..cd82db2fde 100644 --- a/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md +++ b/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md @@ -1,36 +1,36 @@ ---- -slug: welcome-to-user-experience-month -date: 2014-11-07 12:00:33 -0400 -title: Welcome to User Experience Month! -summary: 'One challenge with digital government: it’s hard to see people. If you work at a U.S. Post Office, you interact with your customers, talk with them, and even see what they are feeling by looking at their faces. You can understand their experience fairly easily. In the digital world, technology decreases physical distance but increases' -authors: - - jonathan-rubin -topics: - - monthly-theme - - cx - - digitalgov-user-experience-program - - user-experience - ---- - -{{< legacy-img src="2014/11/600-x-250-UX-monthly-theme-slider-by-Jessica-Skretch-FTCgov.jpg" alt="Jessica Skretch, FTC" caption="" >}} - -One challenge with digital government: it’s hard to see people. - -If you work at a U.S. Post Office, you interact with your customers, talk with them, and even see what they are feeling by looking at their faces. You can understand their experience fairly easily. In the digital world, technology decreases physical distance but increases the personal distance between us and our audience. Often we have to make sense of piles of data and user comments to determine if people even like what we offer or find it valuable. - -So, in addition to collecting good analytics (like through GSA’s free [Digital Analytics Program]({{< ref "/guides/dap/_index.md" >}} "DAP: Digital Analytics Program")), it’s crucial to understand your how your customers use your technology on a one-to-one basis. That’s why you focus on the User Experience (or UX); a product’s ease-of-use, whether it looks nice or creates any emotional friction, and if people can use it to accomplish something they want. - -User Experience is closely related to [Customer Experience]({{< ref "2014-07-07-user-experience-ux-vs-customer-experience-cx-whats-the-dif.md" >}} "User Experience (UX) vs. Customer Experience (CX): What’s the Dif?"), and the [User Experience]({{< ref "digitalgov-user-experience-resources.md" >}} "DigitalGov User Experience Program") program that I manage at GSA helps: build UX teams at agencies across the federal government, them to understand their customers’ needs, and build products centered around them. - -For this month’s UX theme, we’re hitting this topic from lots of angles: - - * The exciting [results from our Federal User Experience research study]({{< ref "2014-11-21-results-2014-federal-user-experience-survey.md" >}} "Results: 2014 Federal User Experience Survey") - * [How Plain Language saved a Department of Education website]({{< ref "2014-11-14-institute-for-education-sciences-usability-case-study.md" >}} "Institute for Education Sciences – Usability Case Study") - * See great [recorded presentations about User Experience by DigitalGov University]({{< ref "2014-11-26-usability-events-round-up-2014.md" >}} "Usability Events Round-Up: 2014") - * How [Accessibility and Usability are similar]({{< ref "2014-11-17-user-experience-impossible-the-line-between-accessibility-and-usability.md" >}} "User Experience Impossible: The Line Between Accessibility and Usability") (and different) - * (We all love surveys.) 
[Here’s how to avoid making a bad one]({{< ref "2014-11-10-4-tips-on-great-survey-design.md" >}} "4 Tips on Great Survey Design") - * Why [slow load times can crush your Responsive Web Design implementation]({{< ref "2014-11-18-trends-on-tuesday-speed-matters-when-measuring-responsive-web-design-performance-load-times.md" >}} "Trends on Tuesday: Speed Matters When Measuring Responsive Web Design Performance Load Times") - * How to ensure people use your site search? Here’s [one important thing NOT to do]({{< ref "2014-11-24-placeholder-text-think-outside-the-box.md" >}} "Placeholder Text: Think Outside the Box") - -Finally, if you want to get involved with the 530+ members of the [Federal User Experience Community]({{< ref "communities/user-experience.md" >}} "Federal User Experience Community"), please [email us](mailto:UXgov@gsa.gov) and we’ll get you signed up. +--- +slug: welcome-to-user-experience-month +date: 2014-11-07 12:00:33 -0400 +title: Welcome to User Experience Month! +summary: 'One challenge with digital government: it’s hard to see people. If you work at a U.S. Post Office, you interact with your customers, talk with them, and even see what they are feeling by looking at their faces. You can understand their experience fairly easily. In the digital world, technology decreases physical distance but increases' +authors: + - jonathan-rubin +topics: + - monthly-theme + - cx + - digitalgov-user-experience-program + - user-experience + +--- + +{{< legacy-img src="2014/11/600-x-250-UX-monthly-theme-slider-by-Jessica-Skretch-FTCgov.jpg" alt="Jessica Skretch, FTC" caption="" >}} + +One challenge with digital government: it’s hard to see people. + +If you work at a U.S. Post Office, you interact with your customers, talk with them, and even see what they are feeling by looking at their faces. You can understand their experience fairly easily. In the digital world, technology decreases physical distance but increases the personal distance between us and our audience. Often we have to make sense of piles of data and user comments to determine if people even like what we offer or find it valuable. + +So, in addition to collecting good analytics (like through GSA’s free [Digital Analytics Program]({{< ref "/guides/dap/_index.md" >}} "DAP: Digital Analytics Program")), it’s crucial to understand your how your customers use your technology on a one-to-one basis. That’s why you focus on the User Experience (or UX); a product’s ease-of-use, whether it looks nice or creates any emotional friction, and if people can use it to accomplish something they want. + +User Experience is closely related to [Customer Experience]({{< ref "2014-07-07-user-experience-ux-vs-customer-experience-cx-whats-the-dif.md" >}} "User Experience (UX) vs. Customer Experience (CX): What’s the Dif?"), and the [User Experience]({{< ref "digitalgov-user-experience-resources.md" >}} "DigitalGov User Experience Program") program that I manage at GSA helps: build UX teams at agencies across the federal government, them to understand their customers’ needs, and build products centered around them. 
+ +For this month’s UX theme, we’re hitting this topic from lots of angles: + + * The exciting [results from our Federal User Experience research study]({{< ref "2014-11-21-results-2014-federal-user-experience-survey.md" >}} "Results: 2014 Federal User Experience Survey") + * [How Plain Language saved a Department of Education website]({{< ref "2014-11-14-institute-for-education-sciences-usability-case-study.md" >}} "Institute for Education Sciences – Usability Case Study") + * See great [recorded presentations about User Experience by DigitalGov University]({{< ref "2014-11-26-usability-events-round-up-2014.md" >}} "Usability Events Round-Up: 2014") + * How [Accessibility and Usability are similar]({{< ref "2014-11-17-user-experience-impossible-the-line-between-accessibility-and-usability.md" >}} "User Experience Impossible: The Line Between Accessibility and Usability") (and different) + * (We all love surveys.) [Here’s how to avoid making a bad one]({{< ref "2014-11-10-4-tips-on-great-survey-design.md" >}} "4 Tips on Great Survey Design") + * Why [slow load times can crush your Responsive Web Design implementation]({{< ref "2014-11-18-trends-on-tuesday-speed-matters-when-measuring-responsive-web-design-performance-load-times.md" >}} "Trends on Tuesday: Speed Matters When Measuring Responsive Web Design Performance Load Times") + * How to ensure people use your site search? Here’s [one important thing NOT to do]({{< ref "2014-11-24-placeholder-text-think-outside-the-box.md" >}} "Placeholder Text: Think Outside the Box") + +Finally, if you want to get involved with the 530+ members of the [Federal User Experience Community]({{< ref "communities/user-experience.md" >}} "Federal User Experience Community"), please [email us](mailto:UXgov@gsa.gov) and we’ll get you signed up. diff --git a/content/news/2014/12/2014-12-23-challenges-round-up.md b/content/news/2014/12/2014-12-23-challenges-round-up.md index 5f1a8e2cde..b69cc81664 100644 --- a/content/news/2014/12/2014-12-23-challenges-round-up.md +++ b/content/news/2014/12/2014-12-23-challenges-round-up.md @@ -47,4 +47,4 @@ You may also be interested in watching [Why Your Challenge & Prize Competition N If you are thinking about launching a video competition then you may be interested in watching [Running a Successful Video Challenge](https://www.youtube.com/watch?v=kaK90anXf7w&index=7&list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC). The presenter for this event, **Jason Crusan**, Director CoECI NASA, presents a case study of how NASA has used professional crowdsourcing for video creation. **Tammi Marcoullier**, Challenge.gov Program Manager, reviews getting from A to B, or how to decide what kind of video challenge you want to execute by examining your goals. -Finally, you can take a look at the [summary of our event on Design Thinking](http://youtu.be/oLAtcfGCcdc?list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC) and how this workshop helped folks working on challenge and prize competitions think through the design and execution of their challenge. Enjoy! For more events around challenge and prize competitions check out our [Events Calendar]({{< ref "/events" >}}. For questions about Challenge.gov or the [Challenge & Prize Community of Practice]({{< ref "challenges-prizes.md" >}} "Challenges & Prizes Community") email- Below are a few articles on optimizing title tags for search engines: -
- -- Nine Best Practices For Optimized < title > Tags -
- -- Title Tag -
- -
-
-
-
- If you’re interested in learning more about search, register for our Search Is the New Big Data (in-person training) on April 10. -
+--- +slug: plain-language-page-titles-more-important-than-ever +date: 2014-03-28 1:00:27 -0400 +title: 'Plain Language Page Titles: More Important than Ever' +summary: Government Web pages are found mainly through search engines. Google recently redesigned its search results page and there are quite a few small, but impactful, changes in this latest redesign. Specifically, it affects how page titles are displayed. Many experts now recommend even +authors: + - ammie-farraj-feijoo +topics: + - content + - writing + - big-data + - search-engine-optimization +--- + +[{{< legacy-img src="2014/03/DigitalGov-Search-screen-shot-600-x-485.jpg" alt="screen grab of DigitalGov Search in Google results page" >}}](https://s3.amazonaws.com/digitalgov/_legacy-img/2014/03/DigitalGov-Search-screen-shot-600-x-485.jpg)Government Web pages are found mainly through search engines. Google recently [redesigned its search results page](http://www.fastcodesign.com/3027704/how-googles-redesigned-search-results-augur-a-more-beautiful-web) and there are quite a few small, but impactful, changes in this latest redesign. Specifically, it affects how page titles are displayed. + +Many experts now recommend even shorter page titles. Below are a couple of articles (plus tools) to see how the change may affect your page titles: + +[Page Title & Meta Description By Pixel Width In SERP Snippet](http://www.screamingfrog.co.uk/page-title-meta-description-lengths-by-pixel-width/) + +[New Title Tag Guidelines & Preview Tool](http://moz.com/blog/new-title-tag-guidelines-preview-tool) + +In addition to the suggestions offered in our [previous article on Achieving Good SEO]({{< ref "2013-05-31-four-steps-to-achieve-good-seo.md" >}}), here are a few specific tips for page titles: + +1. Place page titles in a + ++ Below are a few articles on optimizing title tags for search engines: +
+ ++ Nine Best Practices For Optimized < title > Tags +
+ ++ Title Tag +
+ +
+
+
+
+ If you’re interested in learning more about search, register for our Search Is the New Big Data (in-person training) on April 10. +
diff --git a/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md b/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md index 1f76c1638d..afe3ddd8ed 100644 --- a/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md +++ b/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md @@ -1,65 +1,65 @@ ---- -slug: sign-up-for-digitalgov-citizen-services-summit-friday-may-30 -date: 2014-05-19 3:03:16 -0400 -title: Sign up For DigitalGov Citizen Services Summit, Friday, May 30 -summary: 'We won’t build the government of the 21st century by drawing within the lines. We don’t have to tell you the hard work of building a digital government doesn’t exist in a vacuum or a bubble. Show us social media without mobile, Web without data and user experience without APIs. You can’t? That’s right—in reality,' -authors: - - rflagg - - jherman - - tammi-marcoullier - - jparcell - - apiazza - - jonathan-rubin -topics: - - api - - challenges - - content - - data - - product-management - - metrics - - mobile - - social-media - - user-experience - - DOL - - epa - - FCC - - federal-communications-commission - - GAO - - SBA - - us-department-of-labor - - us-environmental-protection-agency - - us-government-accountability-office - - us-small-business-administration ---- - -We won’t build the government of the 21st century by [drawing within the lines]({{< ref "2014-05-07-because-its-hard.md" >}}). - -We don’t have to tell you the hard work of building a digital government doesn’t exist in a vacuum or a bubble. Show us social media without mobile, Web without data and user experience without APIs. You can’t? That’s right—in reality, digital government intersects and cuts across boundaries every day in order to deliver the digital goods. - -To get ourselves thinking outside the lines in order to build an awesome 21st century government we’re bringing federal, industry and state and local employees together for the [DigitalGov Citizen Services Summit]({{< tmp"events/digitalgov-citizen-services-summit.md" >}}) on Friday, May 30. - -In our event’s panels and Expo, we’ll showcase programs that are combining, collaborating and colluding across technology boundaries to improve: - - * How agencies operate internally - * How agencies collaborate together - * How agencies engage with citizens - -Following an opening keynote by Federal Communications Commission (FCC) CIO, David Bray, our four panels will focus on: - - * performance analysis, - * customer service across channels, - * public private partnerships and - * inter-agency work. - -These panels will explore how agencies can integrate their [data]({{< ref "/topics/code" >}}data1/), [social media]({{< ref "/topics/social-media" >}}), [user experience]({{< ref "/topics/user-experience" >}}), [mobile development]({{< ref "/topics/mobile" >}}) and other programs in order to achieve the best improvements for citizen services. Confirmed speakers include: - - * Jack Bienko, Small Business Administration (SBA) - * Denise Shaw, Environmental Protection Agency (EPA) - * Sarah Kaczmarek, U.S. 
Government Accountability Office (GAO) - * Michael Pulsifer, Department of Labor (DOL) - -You can sign up for the event and see our [ever-expanding speaker list on the event page](https://www.google.com/url?q=https%3A%2F%2Fwww.digitalgov.gov%2Fevent%2Fdigitalgov-citizen-services-summit%2F&sa=D&sntz=1&usg=AFQjCNGiwao6z6PUtq_tcRPW1QVfhf-9WA). - -Our Expo will showcase innovations and shared services across the federal government. We have 30 tables available (and they’re going fast) for federal agencies to showcase projects and introduce yourselves to all the federal employees, contractors and state and local participants. Let us know your interest during the registration process … don’t think for a second that your program, large or small, isn’t the jam we are looking for - +--- +slug: sign-up-for-digitalgov-citizen-services-summit-friday-may-30 +date: 2014-05-19 3:03:16 -0400 +title: Sign up For DigitalGov Citizen Services Summit, Friday, May 30 +summary: 'We won’t build the government of the 21st century by drawing within the lines. We don’t have to tell you the hard work of building a digital government doesn’t exist in a vacuum or a bubble. Show us social media without mobile, Web without data and user experience without APIs. You can’t? That’s right—in reality,' +authors: + - rflagg + - jherman + - tammi-marcoullier + - jparcell + - apiazza + - jonathan-rubin +topics: + - api + - challenges + - content + - data + - product-management + - metrics + - mobile + - social-media + - user-experience + - DOL + - epa + - FCC + - federal-communications-commission + - GAO + - SBA + - us-department-of-labor + - us-environmental-protection-agency + - us-government-accountability-office + - us-small-business-administration +--- + +We won’t build the government of the 21st century by [drawing within the lines]({{< ref "2014-05-07-because-its-hard.md" >}}). + +We don’t have to tell you the hard work of building a digital government doesn’t exist in a vacuum or a bubble. Show us social media without mobile, Web without data and user experience without APIs. You can’t? That’s right—in reality, digital government intersects and cuts across boundaries every day in order to deliver the digital goods. + +To get ourselves thinking outside the lines in order to build an awesome 21st century government we’re bringing federal, industry and state and local employees together for the [DigitalGov Citizen Services Summit]({{< tmp"events/digitalgov-citizen-services-summit.md" >}}) on Friday, May 30. + +In our event’s panels and Expo, we’ll showcase programs that are combining, collaborating and colluding across technology boundaries to improve: + + * How agencies operate internally + * How agencies collaborate together + * How agencies engage with citizens + +Following an opening keynote by Federal Communications Commission (FCC) CIO, David Bray, our four panels will focus on: + + * performance analysis, + * customer service across channels, + * public private partnerships and + * inter-agency work. + +These panels will explore how agencies can integrate their [data]({{< ref "/topics/code" >}}data1/), [social media]({{< ref "/topics/social-media" >}}), [user experience]({{< ref "/topics/user-experience" >}}), [mobile development]({{< ref "/topics/mobile" >}}) and other programs in order to achieve the best improvements for citizen services. Confirmed speakers include: + + * Jack Bienko, Small Business Administration (SBA) + * Denise Shaw, Environmental Protection Agency (EPA) + * Sarah Kaczmarek, U.S. 
Government Accountability Office (GAO) + * Michael Pulsifer, Department of Labor (DOL) + +You can sign up for the event and see our [ever-expanding speaker list on the event page](https://www.google.com/url?q=https%3A%2F%2Fwww.digitalgov.gov%2Fevent%2Fdigitalgov-citizen-services-summit%2F&sa=D&sntz=1&usg=AFQjCNGiwao6z6PUtq_tcRPW1QVfhf-9WA). + +Our Expo will showcase innovations and shared services across the federal government. We have 30 tables available (and they’re going fast) for federal agencies to showcase projects and introduce yourselves to all the federal employees, contractors and state and local participants. Let us know your interest during the registration process … don’t think for a second that your program, large or small, isn’t the jam we are looking for + [Register Now]({{< tmp"events/digitalgov-citizen-services-summit.md" >}})! Seats are Limited! \ No newline at end of file diff --git a/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md b/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md index 8f3635c204..45a8d5b079 100644 --- a/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md +++ b/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md @@ -1,58 +1,58 @@ ---- -slug: harnessing-the-power-of-many-digitalgov-summit-panels-recap -date: 2014-06-03 15:14:11 -0400 -title: Harnessing the Power of Many—DigitalGov Summit Recap -summary: 'At the DigitalGov Citizen Services Summit last Friday, Jacob Parcell, Manager, Mobile Programs at the General Services Administration led a panel on the challenges and benefits of Inter-Agency work. The other panels were on performance analysis, customer service across channels, and public private partnerships. “The challenges are real,” said Parcell, who quoted President Obama’s famous salmon' -authors: - - alison-lemon -topics: - - challenges - - code - - content - - product-management - - metrics - - mobile - - social-media - - user-experience - - Census - - DOL - - epa - - fda - - NCI - - recaps - - us-department-of-labor - - us-environmental-protection-agency - - us-food-and-drug-administration - - united-states-census-bureau ---- - -{{< legacy-img src="2014/06/600-x-370-Jacob-Parcell-Panel-4-Inter-Agency-Work-toni470-flickr-20140530_114324.jpg" alt="Jacob Parcell, GSA - Panel 4: Inter-Agency Work - Alec Permison, Census; Lakshmi Grama, NCI; Denice Shaw, EPA; Mike Pulsifer, DOL" caption="" >}} - -At the [DigitalGov Citizen Services Summit]({{< ref "2014-05-30-digitalgov-citizen-services-summit-a-success.md" >}}) last Friday, Jacob Parcell, Manager, Mobile Programs at the General Services Administration led a panel on the challenges and benefits of Inter-Agency work. The other panels were on [performance analysis]({{< ref "2014-06-03-digitalgov-summit-panels-recap.md" >}}), [customer service across channels]({{< ref "2014-06-05-the-importance-of-cross-channel-customer-service-digitalgov-summit-recap.md" >}} "The Importance of Cross-Channel Customer Service—DigitalGov Summit Recap"), and [public private partnerships]({{< ref "2014-06-05-overcoming-barriers-digitalgov-summit-recap.md" >}} "Overcoming Barriers—DigitalGov Summit Recap"). 
- -“The challenges are real,” said Parcell, who quoted President Obama’s famous salmon quandary: “The Interior Department is in charge of salmon while they’re in fresh water, but the Commerce Department handles them when they’re in saltwater,” Obama said. “I hear it gets even more complicated once they’re smoked.” - -However, the benefits of Inter-Agency work can be enormous. The panel also tackled when to use a top-down versus a bottom-up approach and suggestions for improving inter-agency work. - -**Lakshmi Grama**, Senior Digital Content Strategist at the **National Cancer Institute**, knows this from her recent spearheading of a working group to create a content modeling solution. There was popular content all over government sites, from events to press releases. She knew it was not something that just one group could solve, so she harnessed the power of a 10 agency working group to produce two content models in just six months. “When people find value, they’ll work together.” - -**Alec Permison**, Applications Manager, Census.gov, at the **U.S. Census Bureau**, works on apps that pull data from multiple agencies about things like employment and economic indicators. A major challenge is that agency data can be in a many different formats. “We want an app that’s slick, but that still ensures the quality of the data.” The end product allows citizens to access information directly, “without waiting to hear it in the news.” - -One of the agencies that is used to supplying information is the **Department of Labor** (DOL). **Mike Pulsifer**, Lead IT Specialist at DOL said his agency has over 300 data sets that are public and available to developers. The have a commitment to using open source. - -**Denice Shaw**, Associate Chief Innovation Officer, Office of Research and Development, **Environmental Protection Agency**, knew the problem of nutrient pollution needed an unconventional solution. Many agencies needed the data, and although sensors existed to measure the problem, it was expensive to do so. However, by pulling together multiple stakeholders, including other agencies and academia they were able to lower the cost. - -### Top-down approach vs. a bottom-up approach - -According to Parcell, both approaches can work. When you use a bottom-up approach, “if you can find the things people are interested, you can get more people involved.” - -Grama says “the top-down approach is familiar. The top might not know about the details, they are more interested in the end product.” She also sees a lot of Web and social media folks using a bottom-up approach to figuring things out. The challenge is to articulate why it’s important to top management. - -### Improving inter-agency work - -What can we do improve the quality of inter-agency work? Grama thinks it would be beneficial for government workers to carve out time specifically to think about innovation. - -Agencies can also strive to think beyond their silos, since ultimately we work for the taxpayer. If you do work for another agency, “the taxpayer benefits even if your own agency doesn’t see the direct benefit,” said Pulsifer. 
- -What has been your experience with inter-agency work?_**Alison Lemon** is a [Knowledge Manager for the SocialGov Community](FIND?s=alison+lemon.md) and a Senior Analyst for Social Media with the **FDA’s Office of Women’s Health**._ +--- +slug: harnessing-the-power-of-many-digitalgov-summit-panels-recap +date: 2014-06-03 15:14:11 -0400 +title: Harnessing the Power of Many—DigitalGov Summit Recap +summary: 'At the DigitalGov Citizen Services Summit last Friday, Jacob Parcell, Manager, Mobile Programs at the General Services Administration led a panel on the challenges and benefits of Inter-Agency work. The other panels were on performance analysis, customer service across channels, and public private partnerships. “The challenges are real,” said Parcell, who quoted President Obama’s famous salmon' +authors: + - alison-lemon +topics: + - challenges + - code + - content + - product-management + - metrics + - mobile + - social-media + - user-experience + - Census + - DOL + - epa + - fda + - NCI + - recaps + - us-department-of-labor + - us-environmental-protection-agency + - us-food-and-drug-administration + - united-states-census-bureau +--- + +{{< legacy-img src="2014/06/600-x-370-Jacob-Parcell-Panel-4-Inter-Agency-Work-toni470-flickr-20140530_114324.jpg" alt="Jacob Parcell, GSA - Panel 4: Inter-Agency Work - Alec Permison, Census; Lakshmi Grama, NCI; Denice Shaw, EPA; Mike Pulsifer, DOL" caption="" >}} + +At the [DigitalGov Citizen Services Summit]({{< ref "2014-05-30-digitalgov-citizen-services-summit-a-success.md" >}}) last Friday, Jacob Parcell, Manager, Mobile Programs at the General Services Administration led a panel on the challenges and benefits of Inter-Agency work. The other panels were on [performance analysis]({{< ref "2014-06-03-digitalgov-summit-panels-recap.md" >}}), [customer service across channels]({{< ref "2014-06-05-the-importance-of-cross-channel-customer-service-digitalgov-summit-recap.md" >}} "The Importance of Cross-Channel Customer Service—DigitalGov Summit Recap"), and [public private partnerships]({{< ref "2014-06-05-overcoming-barriers-digitalgov-summit-recap.md" >}} "Overcoming Barriers—DigitalGov Summit Recap"). + +“The challenges are real,” said Parcell, who quoted President Obama’s famous salmon quandary: “The Interior Department is in charge of salmon while they’re in fresh water, but the Commerce Department handles them when they’re in saltwater,” Obama said. “I hear it gets even more complicated once they’re smoked.” + +However, the benefits of Inter-Agency work can be enormous. The panel also tackled when to use a top-down versus a bottom-up approach and suggestions for improving inter-agency work. + +**Lakshmi Grama**, Senior Digital Content Strategist at the **National Cancer Institute**, knows this from her recent spearheading of a working group to create a content modeling solution. There was popular content all over government sites, from events to press releases. She knew it was not something that just one group could solve, so she harnessed the power of a 10 agency working group to produce two content models in just six months. “When people find value, they’ll work together.” + +**Alec Permison**, Applications Manager, Census.gov, at the **U.S. Census Bureau**, works on apps that pull data from multiple agencies about things like employment and economic indicators. A major challenge is that agency data can be in a many different formats. 
“We want an app that’s slick, but that still ensures the quality of the data.” The end product allows citizens to access information directly, “without waiting to hear it in the news.” + +One of the agencies that is used to supplying information is the **Department of Labor** (DOL). **Mike Pulsifer**, Lead IT Specialist at DOL said his agency has over 300 data sets that are public and available to developers. The have a commitment to using open source. + +**Denice Shaw**, Associate Chief Innovation Officer, Office of Research and Development, **Environmental Protection Agency**, knew the problem of nutrient pollution needed an unconventional solution. Many agencies needed the data, and although sensors existed to measure the problem, it was expensive to do so. However, by pulling together multiple stakeholders, including other agencies and academia they were able to lower the cost. + +### Top-down approach vs. a bottom-up approach + +According to Parcell, both approaches can work. When you use a bottom-up approach, “if you can find the things people are interested, you can get more people involved.” + +Grama says “the top-down approach is familiar. The top might not know about the details, they are more interested in the end product.” She also sees a lot of Web and social media folks using a bottom-up approach to figuring things out. The challenge is to articulate why it’s important to top management. + +### Improving inter-agency work + +What can we do improve the quality of inter-agency work? Grama thinks it would be beneficial for government workers to carve out time specifically to think about innovation. + +Agencies can also strive to think beyond their silos, since ultimately we work for the taxpayer. If you do work for another agency, “the taxpayer benefits even if your own agency doesn’t see the direct benefit,” said Pulsifer. + +What has been your experience with inter-agency work?_**Alison Lemon** is a [Knowledge Manager for the SocialGov Community](FIND?s=alison+lemon.md) and a Senior Analyst for Social Media with the **FDA’s Office of Women’s Health**._ _Thanks to our special Summit blogger, Alison, who took up the Open Opportunities challenge. You can [find more opportunities to participate](http://gsablogs.gsa.gov/dsic/category/open-opportunities/)._ \ No newline at end of file diff --git a/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md b/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md index c0ca954432..61ab82a2dd 100644 --- a/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md +++ b/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md @@ -1,290 +1,290 @@ ---- -slug: a-picture-is-worth-a-thousand-tokens-part-ii -date: 2014-11-04 10:00:48 -0400 -title: 'A Picture Is Worth a Thousand Tokens: Part II' -summary: 'In the first part of A Picture Is Worth a Thousand Tokens, I explained why we built a social media-driven image search engine, and specifically how we used Elasticsearch to build its first iteration. 
In this week’s post, I’ll take a deep dive into how we worked to improve relevancy, recall, and the searcher’s experience' -authors: - - loren-siebert -topics: - - content - - our-work - - social-media - - instagram - - open-government - - usagov ---- - -In the first part of [_A Picture Is Worth a Thousand Tokens_]({{< ref "2014-10-28-a-picture-is-worth-a-thousand-tokens.md" >}} "A Picture Is Worth a Thousand Tokens"), I explained why we built a social media-driven image search engine, and specifically how we used Elasticsearch to build its first iteration. In this week’s post, I’ll take a deep dive into how we worked to improve relevancy, recall, and the searcher’s experience as a whole. - -## Redefine Recency - -To solve the scoring problem on older photos for archival photostreams, we decided that after some amount of time, say six weeks, we no longer wanted to keep decaying the relevancy on photos. To put that into effect, we modified the functions in the function score like this: - -[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-Redefine-Recency-code.jpg" alt="600-x-186-tokens-Part-2-Redefine-Recency-code" >}}](https://gist.github.com/loren/df85de9536216ae32b19) - -Now we only apply the Gaussian decay for photos taken in the last six weeks or so. Anything older than that gets a constant decay or negative boost equal to what it would be if the photo were about six weeks old. So rather than having the decay factor continue on down to zero, we stop it at around 0.12. For all those Civil War photos in the Library of Congress’ photostream, the date ends up being factored out of the relevancy equation and they are judged solely on their similarity score and their popularity. - -## Recognize Proximity - -To rank “County event in Jefferson Memorial” higher than “Memorial event in Jefferson County” on a search for _jefferson memorial_, the simplest way to handle it was to use a [match_phrase query](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-match-query.html#_phrase) to make the proximity of the terms a nice-to-have signal that could be factored into the overall score. The updated boolean clause matches on the phrase like this: - -[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-Recognize-Proximity-code.jpg" alt="600-x-186-tokens-Part-2-Recognize-Proximity-code" >}}](https://gist.github.com/loren/7741c52bd8e74d7ef626) - -## Account for Misspellings - -We already knew from prior projects that we’d get a lot of misspelled search terms, but we put off implementing spelling suggestions and overrides until we’d rolled out our minimum viable product in our first iteration. - -Misspelled search terms can be handled in different ways depending on your corpus and your tolerance for false positives. This shows one way of thinking about it: - -A visitor searches for _jeferson memorial_ (sic). - -Perform search with misspelled term. - -Are there any results at all for the misspelled _jeferson memorial_? - -> Show them. - -> Can we suggest a similar query that yields **more** results from our indexes (such as _jefferson memorial_)? - -> Surface suggestion above results: “Did you mean _jefferson memorial_?” - -Can we find a similar query that would yield **any** results? - -> Perform search with that new overridden corrected term. 
- -> Surface override above results: “We’re showing results for _jefferson memorial_.” - -The problem with suggesting a “better” search term than what the visitor typed is that it’s easy to get false positives that vary from hilarious to embarrassing: - - * You searched on _president obama_. Did you mean _obama precedent_? - * You searched on _correspondents dinner_. Did you mean _correspondence dinner_? - * You searched on _civil rights_. Did you mean _civil right_? - * You searched on _better america_. Did you mean _bitter america_? - -OK, that last one didn’t really happen, but it could have, so we put that particular problem on the back shelf and instead focused on handling cases where the visitor’s search as typed didn’t return any results from our indexes but a slight variation on the query did. To do this, we introduced a new field to the indexes called “bigram” based on a [shingle token filter](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-shingle-tokenfilter.html#analysis-shingle-tokenfilter) we called “bigram_filter.” - -The Elasticsearch settings got modified like this: - -{ - "filter": { - "bigram_filter": { - "type": "shingle" - }, - …. - } -}- -The properties in the Flickr and Instagram index mappings got modified as well. - -Flickr: - -[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-flickr-code.jpg" alt="600-x-186-tokens-Part-2-flickr-code" >}}](https://gist.github.com/loren/f08c3e2c97e7773e432e) - -Instagram: - -[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-instagram-code.jpg" alt="600-x-186-tokens-Part-2-instagram-code" >}}](https://gist.github.com/loren/89a80170b14714f074c2) - -This populates the bigram field for each index with whatever natural language fields it might have. For Instagram, it’s just the caption field, but Flickr has title and description so these are essentially appended together as they are copied into the bigram field. In both cases, they are analyzed with the shingle filter which creates bigrams out of the text. The clause of the query that generates the suggestion looks like this: - -
{ - "suggest": { - "text": "jeferson memorial", - "suggestion": { - "phrase": { - "analyzer": "bigram_analyzer", - "field": "bigram", - "size": 1, - "direct_generator": [ - { - "field": "bigram", - "prefix_len": 1 - } - ], - "highlight": { - "pre_tag": "", - "post_tag": "<\/strong>" - } - } - } - } -}- - -
- We only care about the top suggestion, and we’re willing to take the small performance penalty of using just the first letter of the search term as the starting point for the suggestion rather than the default two-character prefix. -
- - -- Here’s an example of how bigrams really help generate relevant multi-word suggestions. -
- - -- An image search on USA.gov for correspondence generates lots of results. Misspell it and search on correspondense and it works as you might expect, showing results for correspondence. -
- - -- But now when you search on correspondense dinner, you get results for correspondents dinner. It correctly recommends correspondents dinner even though correspondence has a higher term frequency than correspondents does. -
- - -- Bigrams (word pairs) let us generate phrase suggestions rather than term suggestions by giving the suggester some collocation information. This increases the likelihood of a good suggestion for a multi-word search query when there are multiple possibilities for each individual word in the query. -
- - -- Most of the near-duplicate photo problems came from Flickr profiles. Flickr has the notion of an album, so we thought we could take advantage of this and save ourselves a lot of work building a classifier. Even if retrieving a photo’s albums (they can belong to many) from the Flickr API had been straightforward, it would still not have helped as some albums contain thousands of very different photos. Some of the Library of Congress albums on Flickr have over 10,000 photos, all with very different titles and descriptions. -
- - -- As we were already using Elasticsearch to do everything else, we wondered if it could also help us group photos into albums and then return just the most relevant photo from each album in the search results. The answer turned out to be “yes” on both fronts by using the more_like_this query as a starting point for classification and the top_hits aggregation to pluck the best photos from each album. -
- - -- First we added an unanalyzed “album” field to the mappings on each index: -
- - -{ - "album": { - "type": "string", - "index": "not_analyzed" - } -}- - -
- Then we established some criteria to describe when two photos should be considered part of the same album: -
- - -- For a given Flickr photo with ID #12345, this query finds other Flickr photos from the same Flickr user profile “flickr_user_1@n02” also taken on April 23rd, 2012 that could potentially be grouped into the same album: -
- - - - - -- The filter part of this query is straightforward, as it’s just enforcing two of the criteria we established for classifying photos. The more_like_this (MLT) part is actually broken down into multiple pieces, each with its own parameters, and wrapped up in a boolean clause. For all of the MLT queries, we set the minimum term frequency to 1 as a given term may only show up once in any particular field. The max_query_terms parameter is raised up really high to 500 terms, as sometimes a field can have that many terms in it and we want to take them all into account. From there, we just used some trial and error to see what percent_terms_to_match threshold to use for each field. -
- - -- The aggregation on the raw document scores came about after looking at the distribution of relevancy scores from the MLT query. Often, some group of, say, 100 photos would be pretty similar to a given photo, but the distribution of scores would be clumped around a few scores. Perhaps 60 photos would have an identical score of 4.5 and another 20 would have the same score of 4.4, and next group down would have a few clumped much lower at 0.6 and then the remainder would have different but all very low scores. The photos that ended up with the same scores to each other tended to have identical metadata. Usually the first two buckets from the aggregations would have very similar scores, so we assigned all of those photos to the same Elasticsearch album. -
- - -- Now that we had some notion of an album, we needed to pick the most relevant photo from each album and then sort all of those top picks by their relevancy scores to generate the actual search results. And don’t forget, we could be searching across hundreds of thousands of albums spanning hundreds of Flickr and Instagram profiles, and we still need to take each photo’s dynamic recency and popularity into account and then blend the results from both Flickr and Instagram indexes. And ideally, all this should happen within a few dozen milliseconds. It seems like an awfully tall order but the top_hits query made it pretty simple. The filtered query part of our request remained the same. We just added a nested aggregation to bucket by album and then pick the top hit from each album: -
- - -{ - "aggs": { - "album_agg": { - "terms": { - "field": "album", - "order": { - "top_score": "desc" - } - }, - "aggs": { - "top_image_hits": { - "top_hits": { - "size": 1 - } - }, - "top_score": { - "max": { - "script": "_doc.score" - } - } - } - } - } -} -- - -
- We changed the type of query to the more efficient search_count, as we no longer needed “hits”. We are only looking at the aggregation buckets now. -
- - --- - -- GET http://localhost:9200/development-asis-flickr_photos,development-asis-instagram_photos/_search?search_type=count&size=0 -
-
- Like any fuzzy matching solution, this album classification strategy is practically guaranteed to both under-classify photos that should be in the same album as well as over-classify photos that should be kept separate. But we were pretty confident that the search experience had improved, and were impressed with how easy Elasticsearch made it to pull a solution together. -
- - -- One downside is that the aggregation query is more CPU and memory intensive than the more typical “hits” query we had before, but we still get results in well under 100ms and we haven’t done anything to optimize it yet. The other problem we created with these aggregated results centered around pagination. If you request 10 results from the API, the 10 photos you get may each come from a different album, and each album may have thousands of photos. So the 10th photo might actually have been the 10,000th “hit”. And while it’s easy for Elasticsearch to tell you how many total hits were found, currently there’s no cheap way of knowing how many potential buckets you’ll have in an aggregation unless you go and compute them all, and that can lead to both memory problems and wasted CPU. -
- - -- Although Elasticsearch defaults to five shards per index, we put each image index in just one shard. As we are relying so heavily on relevance across potentially small populations of photos, we wanted the results to be as accurate as possible (see Elasticsearch’s Relevance Is Broken!). -
- - -- With just a million photos in our initial index this is not a problem, but a billion photos will require the sort of horizontal scaling that Elasticsearch is known for. Changing the number of shards will require a full reindex. We also update our synonyms from time to time, and that requires reindexing, too. To accommodate this without any downtime, we use index aliases. We spin up a new index in the background, populate it with stream2es, and just adjust the alias on the running system in real-time. As the number of shards grows, we can experiment with routing the indexing and the queries to hit the same shards. -
- - -- Many Elasticsearch articles involve closed proprietary systems that cannot be fully shared with the rest of the world. With ASIS, we’ve taken a different approach and published the entire codebase along with this explanation of how we went about building it and the decisions (good and bad) we made along the way. This stemmed from our commitment to transparency and open government, and we’d also like others to be able to fork the ASIS codebase and either help improve it or perhaps just use it to build their own image search engine. +--- +slug: a-picture-is-worth-a-thousand-tokens-part-ii +date: 2014-11-04 10:00:48 -0400 +title: 'A Picture Is Worth a Thousand Tokens: Part II' +summary: 'In the first part of A Picture Is Worth a Thousand Tokens, I explained why we built a social media-driven image search engine, and specifically how we used Elasticsearch to build its first iteration. In this week’s post, I’ll take a deep dive into how we worked to improve relevancy, recall, and the searcher’s experience' +authors: + - loren-siebert +topics: + - content + - our-work + - social-media + - instagram + - open-government + - usagov +--- + +In the first part of [_A Picture Is Worth a Thousand Tokens_]({{< ref "2014-10-28-a-picture-is-worth-a-thousand-tokens.md" >}} "A Picture Is Worth a Thousand Tokens"), I explained why we built a social media-driven image search engine, and specifically how we used Elasticsearch to build its first iteration. In this week’s post, I’ll take a deep dive into how we worked to improve relevancy, recall, and the searcher’s experience as a whole. + +## Redefine Recency + +To solve the scoring problem on older photos for archival photostreams, we decided that after some amount of time, say six weeks, we no longer wanted to keep decaying the relevancy on photos. To put that into effect, we modified the functions in the function score like this: + +[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-Redefine-Recency-code.jpg" alt="600-x-186-tokens-Part-2-Redefine-Recency-code" >}}](https://gist.github.com/loren/df85de9536216ae32b19) + +Now we only apply the Gaussian decay for photos taken in the last six weeks or so. Anything older than that gets a constant decay or negative boost equal to what it would be if the photo were about six weeks old. So rather than having the decay factor continue on down to zero, we stop it at around 0.12. For all those Civil War photos in the Library of Congress’ photostream, the date ends up being factored out of the relevancy equation and they are judged solely on their similarity score and their popularity. + +## Recognize Proximity + +To rank “County event in Jefferson Memorial” higher than “Memorial event in Jefferson County” on a search for _jefferson memorial_, the simplest way to handle it was to use a [match_phrase query](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-match-query.html#_phrase) to make the proximity of the terms a nice-to-have signal that could be factored into the overall score. 
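The production clause is shown in the gist linked just below; as a minimal sketch of the idea, a should clause like this rewards documents that contain the exact phrase without excluding looser matches (the title and description field names are assumptions for illustration):

{
  "bool": {
    "should": [
      { "match_phrase": { "title": "jefferson memorial" } },
      { "match_phrase": { "description": "jefferson memorial" } }
    ]
  }
}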
The updated boolean clause matches on the phrase like this: + +[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-Recognize-Proximity-code.jpg" alt="600-x-186-tokens-Part-2-Recognize-Proximity-code" >}}](https://gist.github.com/loren/7741c52bd8e74d7ef626) + +## Account for Misspellings + +We already knew from prior projects that we’d get a lot of misspelled search terms, but we put off implementing spelling suggestions and overrides until we’d rolled out our minimum viable product in our first iteration. + +Misspelled search terms can be handled in different ways depending on your corpus and your tolerance for false positives. This shows one way of thinking about it: + +A visitor searches for _jeferson memorial_ (sic). + +Perform search with misspelled term. + +Are there any results at all for the misspelled _jeferson memorial_? + +> Show them. + +> Can we suggest a similar query that yields **more** results from our indexes (such as _jefferson memorial_)? + +> Surface suggestion above results: “Did you mean _jefferson memorial_?” + +Can we find a similar query that would yield **any** results? + +> Perform search with that new overridden corrected term. + +> Surface override above results: “We’re showing results for _jefferson memorial_.” + +The problem with suggesting a “better” search term than what the visitor typed is that it’s easy to get false positives that vary from hilarious to embarrassing: + + * You searched on _president obama_. Did you mean _obama precedent_? + * You searched on _correspondents dinner_. Did you mean _correspondence dinner_? + * You searched on _civil rights_. Did you mean _civil right_? + * You searched on _better america_. Did you mean _bitter america_? + +OK, that last one didn’t really happen, but it could have, so we put that particular problem on the back shelf and instead focused on handling cases where the visitor’s search as typed didn’t return any results from our indexes but a slight variation on the query did. To do this, we introduced a new field to the indexes called “bigram” based on a [shingle token filter](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-shingle-tokenfilter.html#analysis-shingle-tokenfilter) we called “bigram_filter.” + +The Elasticsearch settings got modified like this: + +
{ + "filter": { + "bigram_filter": { + "type": "shingle" + }, + …. + } +}+ +The properties in the Flickr and Instagram index mappings got modified as well. + +Flickr: + +[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-flickr-code.jpg" alt="600-x-186-tokens-Part-2-flickr-code" >}}](https://gist.github.com/loren/f08c3e2c97e7773e432e) + +Instagram: + +[{{< legacy-img src="2014/10/600-x-186-tokens-Part-2-instagram-code.jpg" alt="600-x-186-tokens-Part-2-instagram-code" >}}](https://gist.github.com/loren/89a80170b14714f074c2) + +This populates the bigram field for each index with whatever natural language fields it might have. For Instagram, it’s just the caption field, but Flickr has title and description so these are essentially appended together as they are copied into the bigram field. In both cases, they are analyzed with the shingle filter which creates bigrams out of the text. The clause of the query that generates the suggestion looks like this: + +
{ + "suggest": { + "text": "jeferson memorial", + "suggestion": { + "phrase": { + "analyzer": "bigram_analyzer", + "field": "bigram", + "size": 1, + "direct_generator": [ + { + "field": "bigram", + "prefix_len": 1 + } + ], + "highlight": { + "pre_tag": "", + "post_tag": "<\/strong>" + } + } + } + } +}+ + +
+ We only care about the top suggestion, and we’re willing to take the small performance penalty of using just the first letter of the search term as the starting point for the suggestion rather than the default two-character prefix. +
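The suggest clause refers to a “bigram_analyzer” whose definition is elided from the settings snippet above. A minimal sketch of such an analyzer, assuming a standard tokenizer and lowercasing ahead of the shingle filter, would sit next to bigram_filter in the index settings:

{
  "analyzer": {
    "bigram_analyzer": {
      "type": "custom",
      "tokenizer": "standard",
      "filter": ["lowercase", "bigram_filter"]
    }
  }
}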
+ + ++ Here’s an example of how bigrams really help generate relevant multi-word suggestions. +
+ + +
+ An image search on USA.gov for _correspondence_ generates lots of results. Misspell it and search on _correspondense_ and it works as you might expect, showing results for _correspondence_.
+ + +
+ But now when you search on _correspondense dinner_, you get results for _correspondents dinner_. It correctly recommends _correspondents dinner_ even though _correspondence_ has a higher term frequency than _correspondents_ does.
+ + ++ Bigrams (word pairs) let us generate phrase suggestions rather than term suggestions by giving the suggester some collocation information. This increases the likelihood of a good suggestion for a multi-word search query when there are multiple possibilities for each individual word in the query. +
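A quick way to see that collocation information is the _analyze API. Assuming an analyzer along the lines sketched above is registered on the index, a request like this returns the individual words plus the word pair (“correspondents”, “correspondents dinner”, “dinner”):

GET http://localhost:9200/development-asis-flickr_photos/_analyze?analyzer=bigram_analyzer&text=correspondents+dinner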
+ + ++ Most of the near-duplicate photo problems came from Flickr profiles. Flickr has the notion of an album, so we thought we could take advantage of this and save ourselves a lot of work building a classifier. Even if retrieving a photo’s albums (they can belong to many) from the Flickr API had been straightforward, it would still not have helped as some albums contain thousands of very different photos. Some of the Library of Congress albums on Flickr have over 10,000 photos, all with very different titles and descriptions. +
+ + ++ As we were already using Elasticsearch to do everything else, we wondered if it could also help us group photos into albums and then return just the most relevant photo from each album in the search results. The answer turned out to be “yes” on both fronts by using the more_like_this query as a starting point for classification and the top_hits aggregation to pluck the best photos from each album. +
+ + ++ First we added an unanalyzed “album” field to the mappings on each index: +
+ + +{ + "album": { + "type": "string", + "index": "not_analyzed" + } +}+ + +
+ Then we established some criteria to describe when two photos should be considered part of the same album: +
+ + ++ For a given Flickr photo with ID #12345, this query finds other Flickr photos from the same Flickr user profile “flickr_user_1@n02” also taken on April 23rd, 2012 that could potentially be grouped into the same album: +
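The query itself is not reproduced here, so the sketch below only shows the general shape described in the next paragraph: a filtered query whose filter pins the profile and the day the photo was taken, and whose bool query holds one more_like_this clause per text field. The owner and taken_at field names, the like_text placeholders, and the percent_terms_to_match values are illustrative assumptions, not the production settings.

{
  "query": {
    "filtered": {
      "query": {
        "bool": {
          "should": [
            {
              "more_like_this": {
                "fields": ["title"],
                "like_text": "title of photo 12345",
                "min_term_freq": 1,
                "max_query_terms": 500,
                "percent_terms_to_match": 0.5
              }
            },
            {
              "more_like_this": {
                "fields": ["description"],
                "like_text": "description of photo 12345",
                "min_term_freq": 1,
                "max_query_terms": 500,
                "percent_terms_to_match": 0.3
              }
            }
          ]
        }
      },
      "filter": {
        "and": [
          { "term": { "owner": "flickr_user_1@n02" } },
          { "term": { "taken_at": "2012-04-23" } }
        ]
      }
    }
  }
}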
+ + + + + ++ The filter part of this query is straightforward, as it’s just enforcing two of the criteria we established for classifying photos. The more_like_this (MLT) part is actually broken down into multiple pieces, each with its own parameters, and wrapped up in a boolean clause. For all of the MLT queries, we set the minimum term frequency to 1 as a given term may only show up once in any particular field. The max_query_terms parameter is raised up really high to 500 terms, as sometimes a field can have that many terms in it and we want to take them all into account. From there, we just used some trial and error to see what percent_terms_to_match threshold to use for each field. +
+ + +
+ The aggregation on the raw document scores came about after looking at the distribution of relevancy scores from the MLT query. Often, some group of, say, 100 photos would be pretty similar to a given photo, but the distribution of scores would be clumped around a few values. Perhaps 60 photos would have an identical score of 4.5 and another 20 would share a score of 4.4; the next group down would have a few clumped much lower at 0.6, and the remainder would have different but all very low scores. The photos that ended up with identical scores tended to have identical metadata. Usually the first two buckets from the aggregations would have very similar scores, so we assigned all of those photos to the same Elasticsearch album.
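The post does not show that score aggregation, but one way to surface the clumps, reusing the _doc.score script that appears in the top_score aggregation below, is a terms aggregation keyed on the raw score; the aggregation name and size here are assumptions.

{
  "aggs": {
    "score_clusters": {
      "terms": {
        "script": "_doc.score",
        "size": 10,
        "order": { "_term": "desc" }
      }
    }
  }
}

Each bucket’s doc_count then tells you how many photos share an identical score.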
+ + +
+ Now that we had some notion of an album, we needed to pick the most relevant photo from each album and then sort all of those top picks by their relevancy scores to generate the actual search results. And don’t forget, we could be searching across hundreds of thousands of albums spanning hundreds of Flickr and Instagram profiles, and we still needed to take each photo’s dynamic recency and popularity into account and then blend the results from both the Flickr and Instagram indexes. Ideally, all of this should happen within a few dozen milliseconds. It seems like an awfully tall order, but the top_hits aggregation made it pretty simple. The filtered query part of our request remained the same. We just added a nested aggregation to bucket by album and then pick the top hit from each album:
+ + +{ + "aggs": { + "album_agg": { + "terms": { + "field": "album", + "order": { + "top_score": "desc" + } + }, + "aggs": { + "top_image_hits": { + "top_hits": { + "size": 1 + } + }, + "top_score": { + "max": { + "script": "_doc.score" + } + } + } + } + } +} ++ + +
+ We changed the search_type to the more efficient count, as we no longer needed “hits”. We are only looking at the aggregation buckets now.
+ + +++ + ++ GET http://localhost:9200/development-asis-flickr_photos,development-asis-instagram_photos/_search?search_type=count&size=0 +
+
+ Like any fuzzy matching solution, this album classification strategy is practically guaranteed both to under-classify photos that should be in the same album and to over-classify photos that should be kept separate. But we were pretty confident that the search experience had improved, and we were impressed with how easy Elasticsearch made it to pull a solution together.
+ + ++ One downside is that the aggregation query is more CPU and memory intensive than the more typical “hits” query we had before, but we still get results in well under 100ms and we haven’t done anything to optimize it yet. The other problem we created with these aggregated results centered around pagination. If you request 10 results from the API, the 10 photos you get may each come from a different album, and each album may have thousands of photos. So the 10th photo might actually have been the 10,000th “hit”. And while it’s easy for Elasticsearch to tell you how many total hits were found, currently there’s no cheap way of knowing how many potential buckets you’ll have in an aggregation unless you go and compute them all, and that can lead to both memory problems and wasted CPU. +
+ + ++ Although Elasticsearch defaults to five shards per index, we put each image index in just one shard. As we are relying so heavily on relevance across potentially small populations of photos, we wanted the results to be as accurate as possible (see Elasticsearch’s Relevance Is Broken!). +
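Shard count is fixed when an index is created, so this is just a settings block passed at creation time; the index name here is a hypothetical placeholder:

PUT http://localhost:9200/flickr_photos_v1
{
  "settings": {
    "number_of_shards": 1
  }
}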
+ + ++ With just a million photos in our initial index this is not a problem, but a billion photos will require the sort of horizontal scaling that Elasticsearch is known for. Changing the number of shards will require a full reindex. We also update our synonyms from time to time, and that requires reindexing, too. To accommodate this without any downtime, we use index aliases. We spin up a new index in the background, populate it with stream2es, and just adjust the alias on the running system in real-time. As the number of shards grows, we can experiment with routing the indexing and the queries to hit the same shards. +
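Once the replacement index is populated, the cutover is a single atomic _aliases call. This sketch assumes the name we query, development-asis-flickr_photos, is the alias, and that the physical indexes carry hypothetical versioned names; searches keep hitting the alias throughout, so there is no downtime.

POST http://localhost:9200/_aliases
{
  "actions": [
    { "remove": { "index": "flickr_photos_v1", "alias": "development-asis-flickr_photos" } },
    { "add": { "index": "flickr_photos_v2", "alias": "development-asis-flickr_photos" } }
  ]
}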
+ + ++ Many Elasticsearch articles involve closed proprietary systems that cannot be fully shared with the rest of the world. With ASIS, we’ve taken a different approach and published the entire codebase along with this explanation of how we went about building it and the decisions (good and bad) we made along the way. This stemmed from our commitment to transparency and open government, and we’d also like others to be able to fork the ASIS codebase and either help improve it or perhaps just use it to build their own image search engine.
\ No newline at end of file diff --git a/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md b/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md index cd82db2fde..4704f163ca 100644 --- a/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md +++ b/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md @@ -1,36 +1,36 @@ ---- -slug: welcome-to-user-experience-month -date: 2014-11-07 12:00:33 -0400 -title: Welcome to User Experience Month! -summary: 'One challenge with digital government: it’s hard to see people. If you work at a U.S. Post Office, you interact with your customers, talk with them, and even see what they are feeling by looking at their faces. You can understand their experience fairly easily. In the digital world, technology decreases physical distance but increases' -authors: - - jonathan-rubin -topics: - - monthly-theme - - cx - - digitalgov-user-experience-program - - user-experience - ---- - -{{< legacy-img src="2014/11/600-x-250-UX-monthly-theme-slider-by-Jessica-Skretch-FTCgov.jpg" alt="Jessica Skretch, FTC" caption="" >}} - -One challenge with digital government: it’s hard to see people. - -If you work at a U.S. Post Office, you interact with your customers, talk with them, and even see what they are feeling by looking at their faces. You can understand their experience fairly easily. In the digital world, technology decreases physical distance but increases the personal distance between us and our audience. Often we have to make sense of piles of data and user comments to determine if people even like what we offer or find it valuable. - -So, in addition to collecting good analytics (like through GSA’s free [Digital Analytics Program]({{< ref "/guides/dap/_index.md" >}} "DAP: Digital Analytics Program")), it’s crucial to understand your how your customers use your technology on a one-to-one basis. That’s why you focus on the User Experience (or UX); a product’s ease-of-use, whether it looks nice or creates any emotional friction, and if people can use it to accomplish something they want. - -User Experience is closely related to [Customer Experience]({{< ref "2014-07-07-user-experience-ux-vs-customer-experience-cx-whats-the-dif.md" >}} "User Experience (UX) vs. Customer Experience (CX): What’s the Dif?"), and the [User Experience]({{< ref "digitalgov-user-experience-resources.md" >}} "DigitalGov User Experience Program") program that I manage at GSA helps: build UX teams at agencies across the federal government, them to understand their customers’ needs, and build products centered around them. - -For this month’s UX theme, we’re hitting this topic from lots of angles: - - * The exciting [results from our Federal User Experience research study]({{< ref "2014-11-21-results-2014-federal-user-experience-survey.md" >}} "Results: 2014 Federal User Experience Survey") - * [How Plain Language saved a Department of Education website]({{< ref "2014-11-14-institute-for-education-sciences-usability-case-study.md" >}} "Institute for Education Sciences – Usability Case Study") - * See great [recorded presentations about User Experience by DigitalGov University]({{< ref "2014-11-26-usability-events-round-up-2014.md" >}} "Usability Events Round-Up: 2014") - * How [Accessibility and Usability are similar]({{< ref "2014-11-17-user-experience-impossible-the-line-between-accessibility-and-usability.md" >}} "User Experience Impossible: The Line Between Accessibility and Usability") (and different) - * (We all love surveys.) 
[Here’s how to avoid making a bad one]({{< ref "2014-11-10-4-tips-on-great-survey-design.md" >}} "4 Tips on Great Survey Design") - * Why [slow load times can crush your Responsive Web Design implementation]({{< ref "2014-11-18-trends-on-tuesday-speed-matters-when-measuring-responsive-web-design-performance-load-times.md" >}} "Trends on Tuesday: Speed Matters When Measuring Responsive Web Design Performance Load Times") - * How to ensure people use your site search? Here’s [one important thing NOT to do]({{< ref "2014-11-24-placeholder-text-think-outside-the-box.md" >}} "Placeholder Text: Think Outside the Box") - -Finally, if you want to get involved with the 530+ members of the [Federal User Experience Community]({{< ref "communities/user-experience.md" >}} "Federal User Experience Community"), please [email us](mailto:UXgov@gsa.gov) and we’ll get you signed up. +--- +slug: welcome-to-user-experience-month +date: 2014-11-07 12:00:33 -0400 +title: Welcome to User Experience Month! +summary: 'One challenge with digital government: it’s hard to see people. If you work at a U.S. Post Office, you interact with your customers, talk with them, and even see what they are feeling by looking at their faces. You can understand their experience fairly easily. In the digital world, technology decreases physical distance but increases' +authors: + - jonathan-rubin +topics: + - monthly-theme + - cx + - digitalgov-user-experience-program + - user-experience + +--- + +{{< legacy-img src="2014/11/600-x-250-UX-monthly-theme-slider-by-Jessica-Skretch-FTCgov.jpg" alt="Jessica Skretch, FTC" caption="" >}} + +One challenge with digital government: it’s hard to see people. + +If you work at a U.S. Post Office, you interact with your customers, talk with them, and even see what they are feeling by looking at their faces. You can understand their experience fairly easily. In the digital world, technology decreases physical distance but increases the personal distance between us and our audience. Often we have to make sense of piles of data and user comments to determine if people even like what we offer or find it valuable. + +So, in addition to collecting good analytics (like through GSA’s free [Digital Analytics Program]({{< ref "/guides/dap/_index.md" >}} "DAP: Digital Analytics Program")), it’s crucial to understand your how your customers use your technology on a one-to-one basis. That’s why you focus on the User Experience (or UX); a product’s ease-of-use, whether it looks nice or creates any emotional friction, and if people can use it to accomplish something they want. + +User Experience is closely related to [Customer Experience]({{< ref "2014-07-07-user-experience-ux-vs-customer-experience-cx-whats-the-dif.md" >}} "User Experience (UX) vs. Customer Experience (CX): What’s the Dif?"), and the [User Experience]({{< ref "digitalgov-user-experience-resources.md" >}} "DigitalGov User Experience Program") program that I manage at GSA helps: build UX teams at agencies across the federal government, them to understand their customers’ needs, and build products centered around them. 
+ +For this month’s UX theme, we’re hitting this topic from lots of angles: + + * The exciting [results from our Federal User Experience research study]({{< ref "2014-11-21-results-2014-federal-user-experience-survey.md" >}} "Results: 2014 Federal User Experience Survey") + * [How Plain Language saved a Department of Education website]({{< ref "2014-11-14-institute-for-education-sciences-usability-case-study.md" >}} "Institute for Education Sciences – Usability Case Study") + * See great [recorded presentations about User Experience by DigitalGov University]({{< ref "2014-11-26-usability-events-round-up-2014.md" >}} "Usability Events Round-Up: 2014") + * How [Accessibility and Usability are similar]({{< ref "2014-11-17-user-experience-impossible-the-line-between-accessibility-and-usability.md" >}} "User Experience Impossible: The Line Between Accessibility and Usability") (and different) + * (We all love surveys.) [Here’s how to avoid making a bad one]({{< ref "2014-11-10-4-tips-on-great-survey-design.md" >}} "4 Tips on Great Survey Design") + * Why [slow load times can crush your Responsive Web Design implementation]({{< ref "2014-11-18-trends-on-tuesday-speed-matters-when-measuring-responsive-web-design-performance-load-times.md" >}} "Trends on Tuesday: Speed Matters When Measuring Responsive Web Design Performance Load Times") + * How to ensure people use your site search? Here’s [one important thing NOT to do]({{< ref "2014-11-24-placeholder-text-think-outside-the-box.md" >}} "Placeholder Text: Think Outside the Box") + +Finally, if you want to get involved with the 530+ members of the [Federal User Experience Community]({{< ref "communities/user-experience.md" >}} "Federal User Experience Community"), please [email us](mailto:UXgov@gsa.gov) and we’ll get you signed up. diff --git a/content/news/2014/12/2014-12-23-challenges-round-up.md b/content/news/2014/12/2014-12-23-challenges-round-up.md index b69cc81664..5f1a8e2cde 100644 --- a/content/news/2014/12/2014-12-23-challenges-round-up.md +++ b/content/news/2014/12/2014-12-23-challenges-round-up.md @@ -47,4 +47,4 @@ You may also be interested in watching [Why Your Challenge & Prize Competition N If you are thinking about launching a video competition then you may be interested in watching [Running a Successful Video Challenge](https://www.youtube.com/watch?v=kaK90anXf7w&index=7&list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC). The presenter for this event, **Jason Crusan**, Director CoECI NASA, presents a case study of how NASA has used professional crowdsourcing for video creation. **Tammi Marcoullier**, Challenge.gov Program Manager, reviews getting from A to B, or how to decide what kind of video challenge you want to execute by examining your goals. -Finally, you can take a look at the [summary of our event on Design Thinking](http://youtu.be/oLAtcfGCcdc?list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC) and how this workshop helped folks working on challenge and prize competitions think through the design and execution of their challenge. Enjoy! For more events around challenge and prize competitions check out our [Events Calendar]({{< ref "/events" >}}). For questions about Challenge.gov or the [Challenge & Prize Community of Practice]({{< ref "challenges-prizes.md" >}} "Challenges & Prizes Community") email
+1. Place page titles in a <title>
tag within the <head>
.
+2. There’s no magic number, but around 55 characters or less is good.
+3. There’s no set syntax, but “Primary Keyword – Secondary Keyword | Brand Name” is good.
+
Below are a few articles on optimizing title tags for search engines: -
+ -+
Nine Best Practices For Optimized < title > Tags -
+ -+
-
-
-
-
+ Title: the most important element of a quality Web page
-+
If you’re interested in learning more about search, register for our Search Is the New Big Data (in-person training) on April 10.
diff --git a/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md b/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md index afe3ddd8ed..a858ee2b7f 100644 --- a/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md +++ b/content/news/2014/05/2014-05-19-sign-up-for-digitalgov-citizen-services-summit-friday-may-30.md @@ -51,7 +51,7 @@ Following an opening keynote by Federal Communications Commission (FCC) CIO, Dav * public private partnerships and * inter-agency work. -These panels will explore how agencies can integrate their [data]({{< ref "/topics/code" >}}data1/), [social media]({{< ref "/topics/social-media" >}}), [user experience]({{< ref "/topics/user-experience" >}}), [mobile development]({{< ref "/topics/mobile" >}}) and other programs in order to achieve the best improvements for citizen services. Confirmed speakers include: +These panels will explore how agencies can integrate their [data]({{< ref "/topics/code" >}}), [social media]({{< ref "/topics/social-media" >}}), [user experience]({{< ref "/topics/user-experience" >}}), [mobile development]({{< ref "/topics/mobile" >}}) and other programs in order to achieve the best improvements for citizen services. Confirmed speakers include: * Jack Bienko, Small Business Administration (SBA) * Denise Shaw, Environmental Protection Agency (EPA) diff --git a/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md b/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md index 45a8d5b079..a01cb647b8 100644 --- a/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md +++ b/content/news/2014/06/2014-06-03-harnessing-the-power-of-many-digitalgov-summit-panels-recap.md @@ -54,5 +54,5 @@ What can we do improve the quality of inter-agency work? Grama thinks it would b Agencies can also strive to think beyond their silos, since ultimately we work for the taxpayer. If you do work for another agency, “the taxpayer benefits even if your own agency doesn’t see the direct benefit,” said Pulsifer. -What has been your experience with inter-agency work?_**Alison Lemon** is a [Knowledge Manager for the SocialGov Community](FIND?s=alison+lemon.md) and a Senior Analyst for Social Media with the **FDA’s Office of Women’s Health**._ +What has been your experience with inter-agency work?_**Alison Lemon** is a [Knowledge Manager for the SocialGov Community]({{}}) and a Senior Analyst for Social Media with the **FDA’s Office of Women’s Health**._ _Thanks to our special Summit blogger, Alison, who took up the Open Opportunities challenge. 
You can [find more opportunities to participate](http://gsablogs.gsa.gov/dsic/category/open-opportunities/)._ \ No newline at end of file diff --git a/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md b/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md index 61ab82a2dd..485c5a18dc 100644 --- a/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md +++ b/content/news/2014/11/2014-11-04-a-picture-is-worth-a-thousand-tokens-part-ii.md @@ -102,7 +102,7 @@ This populates the bigram field for each index with whatever natural language fi ], "highlight": { "pre_tag": "", - "post_tag": "<\/strong>" + "post_tag": "" } } } diff --git a/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md b/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md index 4704f163ca..8120445304 100644 --- a/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md +++ b/content/news/2014/11/2014-11-07-welcome-to-user-experience-month.md @@ -21,7 +21,7 @@ If you work at a U.S. Post Office, you interact with your customers, talk with t So, in addition to collecting good analytics (like through GSA’s free [Digital Analytics Program]({{< ref "/guides/dap/_index.md" >}} "DAP: Digital Analytics Program")), it’s crucial to understand your how your customers use your technology on a one-to-one basis. That’s why you focus on the User Experience (or UX); a product’s ease-of-use, whether it looks nice or creates any emotional friction, and if people can use it to accomplish something they want. -User Experience is closely related to [Customer Experience]({{< ref "2014-07-07-user-experience-ux-vs-customer-experience-cx-whats-the-dif.md" >}} "User Experience (UX) vs. Customer Experience (CX): What’s the Dif?"), and the [User Experience]({{< ref "digitalgov-user-experience-resources.md" >}} "DigitalGov User Experience Program") program that I manage at GSA helps: build UX teams at agencies across the federal government, them to understand their customers’ needs, and build products centered around them. +User Experience is closely related to [Customer Experience]({{< ref "2014-07-07-user-experience-ux-vs-customer-experience-cx-whats-the-dif.md" >}}), and the [User Experience]({{< ref "digitalgov-user-experience-resources.md" >}} "DigitalGov User Experience Program") program that I manage at GSA helps: build UX teams at agencies across the federal government, them to understand their customers’ needs, and build products centered around them. For this month’s UX theme, we’re hitting this topic from lots of angles: diff --git a/content/news/2014/12/2014-12-23-challenges-round-up.md b/content/news/2014/12/2014-12-23-challenges-round-up.md index 5f1a8e2cde..b69cc81664 100644 --- a/content/news/2014/12/2014-12-23-challenges-round-up.md +++ b/content/news/2014/12/2014-12-23-challenges-round-up.md @@ -47,4 +47,4 @@ You may also be interested in watching [Why Your Challenge & Prize Competition N If you are thinking about launching a video competition then you may be interested in watching [Running a Successful Video Challenge](https://www.youtube.com/watch?v=kaK90anXf7w&index=7&list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC). The presenter for this event, **Jason Crusan**, Director CoECI NASA, presents a case study of how NASA has used professional crowdsourcing for video creation. 
**Tammi Marcoullier**, Challenge.gov Program Manager, reviews getting from A to B, or how to decide what kind of video challenge you want to execute by examining your goals. -Finally, you can take a look at the [summary of our event on Design Thinking](http://youtu.be/oLAtcfGCcdc?list=PLd9b-GuOJ3nFeJeAHAn3Z5opohjxIw8OC) and how this workshop helped folks working on challenge and prize competitions think through the design and execution of their challenge. Enjoy! For more events around challenge and prize competitions check out our [Events Calendar]({{< ref "/events" >}}. For questions about Challenge.gov or the [Challenge & Prize Community of Practice]({{< ref "challenges-prizes.md" >}} "Challenges & Prizes Community") email
-1. Place page titles in a <title>
tag within the <head>
.
-2. There’s no magic number, but around 55 characters or less is good.
-3. There’s no set syntax, but “Primary Keyword – Secondary Keyword | Brand Name” is good.
- Below are a few articles on optimizing title tags for search engines: -
+1. Place page titles in a<title>
tag within the <head>
.
+2. There’s no magic number, but around 55 characters or less is good.
+3. There’s no set syntax, but “Primary Keyword – Secondary Keyword | Brand Name” is good.
- - Nine Best Practices For Optimized < title > Tags -
+Below are a few articles on optimizing title tags for search engines: -- Title Tag -
+* Nine Best Practices For Optimized < title > Tags +* Title Tag +* Title: the most important element of a quality Web page -- Title: the most important element of a quality Web page
+_If you’re interested in learning more about search, register for our Search Is the New Big Data (in-person training) on April 10._ -- If you’re interested in learning more about search, register for our Search Is the New Big Data (in-person training) on April 10. -
From 8ff7b027a9dd1c91310b7d48efecfce7bcc5cfe6 Mon Sep 17 00:00:00 2001 From: Toni Bonitto