
Localization Testing Practices: Engineering the International Quality

Summary: In the present software industry, ‘Globalization’ is one of the most important steps in ensuring a product is ready for international markets. Thus, international quality engineering becomes a crucial part of the entire engineering cycle. This article tries to shed some light on quality engineering as applicable in the different Globalization phases, and discusses some good practices that can be utilized to engineer high-quality international software.

There was a time when software packages were happily monolingual and successful, but the new-age ones need to be developed to serve global customers, who have started to expect not just adaptation but personalization. As these products cater to customers and workflows across the world, it becomes really important to ensure their quality, keeping in mind the linguistic and cultural aspects of the region the product is intended to address.

The question lies in identifying what makes high-quality internationalized software. What aspects should be covered as part of localization testing? Let’s look at the different testing practices adopted in this validation, and at how cultural aspects and the text content play an important role in ensuring quality.

What is G11N, I18N, L10N?

Before we jump in to understand localization testing, let’s go through a few terms such as ‘Internationalization’, ‘Localization’, and ‘Globalization’, which are often used interchangeably but differ in meaning altogether.

Globalization (G11N) is the process by which businesses or other organizations develop international influence or start operating on an international scale. It is the overarching superset of the ‘Internationalization’ and ‘Localization’ activities in the product, and encompasses other business-level activities such as marketing, legal, and sales to enable the appropriate end-user experiences worldwide.

Internationalization (I18N) is an engineering exercise focused on generalizing a product’s architecture so that it can handle multiple languages, scripts, and regional conventions (currency, sorting rules, number and date formats, and so on) without the need for locale-specific handling in the code.
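As a minimal sketch of what “no locale-specific handling in the code” means, the snippet below drives date and number formatting from a per-locale conventions table instead of hard-coded logic. The table here is purely illustrative; real products typically rely on CLDR data via libraries such as ICU or Babel.

```python
from datetime import date

# Illustrative per-locale conventions; real products use CLDR/ICU data.
LOCALE_CONVENTIONS = {
    "en-US": {"date": "{m:02d}/{d:02d}/{y}", "decimal": ".", "group": ","},
    "de-DE": {"date": "{d:02d}.{m:02d}.{y}", "decimal": ",", "group": "."},
    "ja-JP": {"date": "{y}/{m:02d}/{d:02d}", "decimal": ".", "group": ","},
}

def format_date(d: date, locale: str) -> str:
    """Format a date using the target locale's pattern, not hard-coded order."""
    conv = LOCALE_CONVENTIONS[locale]
    return conv["date"].format(d=d.day, m=d.month, y=d.year)

def format_number(n: float, locale: str) -> str:
    """Format a number with the target locale's separators."""
    conv = LOCALE_CONVENTIONS[locale]
    grouped = f"{n:,.2f}"  # English-style grouping first
    # Swap separators via a placeholder so the two replacements don't collide.
    return (grouped.replace(",", "\x00")
                   .replace(".", conv["decimal"])
                   .replace("\x00", conv["group"]))
```

Adding a new locale then means adding a data row, not touching the formatting code, which is exactly the locale neutrality that I18N aims for.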

Localization (L10N), on the other hand, is the process of adapting the user interface of a product or service to a particular language, culture (culturalization), and the desired local “look and feel”. Translating the product’s user interface is just one step of the localization process; resizing dialogs, buttons, and palette tabs to accommodate longer translated strings is also part of localization.

The following illustration should help the reader develop a visual understanding of the correlation between these different terms:

Localization Testing

As the name states, this activity includes the verification and validation of the localization done for a software application. However, is it enough to test whether the text appears in the respective locale and translation? Does that ensure that users of that language in that region will have seamless workflows with your software? You got it right, the answer to both is a simple ‘no’, and there is more to it.

There are many aspects to localization testing, such as the look and feel of the application, the language norms being followed, date/time, currency, and the use of greetings. One of the most important is the context of the message being conveyed. Translations out of context lead to inappropriate customer experiences. To cite an example, common words such as ‘book’ may have different meanings in different contexts, as in a ‘book’ to read, ‘book’ a table, or ‘book’ profits. Therefore, the localization quality engineer needs to make sure the translations are validated ‘in context’ by a native of the respective region. Another important aspect is the regional norms and the evolving usage patterns of the local languages and the culture of the target market, which are covered as a part of linguistic testing.

Let us try to understand the different types of testing that should be carried out during the Globalization phases.

  • Enablement Testing – This covers the handling of foreign text and data within the program: sorting according to the respective locale, importing and exporting text and data, correct handling of currency and date/time formats, fonts, string parsing, text search, and upper/lower-case handling. It also covers tests that validate support for single-byte/double-byte characters and the implementation of different Unicode encodings (UTF-8, UTF-16, etc.). Since string lengths differ across languages, it is of utmost importance that the UI framework is flexible enough to accommodate strings of variable, and often large, lengths. German, for example, is a language in which strings can run close to three times the length of the English original. Defects are caught in such cases when the UI layout is hard-coded and the text gets truncated. This testing should be performed early in the cycle, perhaps soon after the code is frozen. It is important because it ensures that the product is ‘locale neutral’ and can be adapted to any regional form without subsequent code changes in the core application.
  • Localizability Testing – This is done to identify potential issues that could hamper the localization of an application, such as a hard-coded string or UI layout issues. It may be done after or along with enablement testing, on a pseudo/mock build. This is a specially instrumented build that appends an identifier to every externalized string, which makes all the hard-coded strings easy to uncover. When this instrumented build is launched, the strings appear mocked, as shown in the image below:

As shown in the illustration above, characters are appended at the start of each string – so any string that has not been externalized will not appear mocked and can be caught in the user interface by testers.
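A pseudo-localization pass of this kind can be sketched in a few lines; the marker characters and expansion factor below are illustrative, not the ones any particular instrumented build uses:

```python
def pseudo_localize(s: str, expand: float = 1.4) -> str:
    """Wrap an externalized string with visible markers and pad it to
    simulate the length growth seen in languages such as German."""
    pad = "~" * max(0, int(len(s) * expand) - len(s))
    return f"[!!{s}{pad}!!]"

# Every string routed through the resource loader gets wrapped:
pseudo_localize("Cancel")   # '[!!Cancel~~!!]'
```

A hard-coded string never passes through the resource loader, so it shows up unwrapped in the UI and is immediately visible to testers; the padding also flushes out layouts that cannot absorb longer translations.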

  • Localization Functional Testing – As the name suggests, this covers functional tests in a localized environment. It unearths complications posed by differences in operating system handling across languages, and other dependencies with respect to text directionality (RTL/LTR/vertical), character sets, keyboards, IMEs, I/O operations, etc.
  • Linguistic Testing – As discussed earlier, this covers tests that validate that the text in the user interface appears correctly and completely, with no truncation, mistranslation, or misapplication. The overall language of the text should be tested in context so that it conveys the right message. Cultural aspects should also be considered while performing such testing.
    • One may ask why culture is so important. Take the example of a Chinese restaurant: what a customer expects is to have noodles served with chili sauce and chopsticks, right? If the dish were served with olive oil and garlic bread instead, would the customer be happy? No – and that is exactly the case with localized software: software companies need to be sensitive to the cultural expectations of their customers. Similarly, if marketing content is being localized, companies need to be mindful of the attire any models in the images are wearing, or of words referring to religious beliefs, when targeting the Middle Eastern market, for example. Therefore, images, content, colors, and themes should all be validated accordingly.
  • Keyboard Input Testing – Apart from the above types of testing, this is another important activity that should be performed during both the enablement and the functional testing phases. I am calling it out separately because it is often missed. During enablement testing, input of single-byte/double-byte characters is tested, while during the functional testing phase, keyboard shortcuts are covered. Several customer defects may be reported if such testing is skipped. The application cannot be tested just by copy-pasting different text into the input fields; input data should be fed in through actual keyboards. There is a plethora of keyboards available in the market, and the interesting fact is that different regions have different key layouts. For example, keys like ‘?’, ‘.’, ‘/’, ‘@’, ‘Alt’, and ‘Ctrl’ may be placed differently on different keyboards. If these are not implemented and tested properly, using them to input text gives rise to incorrect display.
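To see why copy-paste is no substitute for real keyboards, note that the same physical key produces different characters under different layouts, so input and shortcut handling must be keyed off the characters a layout produces, not physical key positions. A small sketch (the physical key names follow the W3C UI Events `code` convention; the layout tables are abbreviated):

```python
# Abbreviated layout tables: physical key position -> character produced.
QWERTY = {"KeyQ": "q", "KeyA": "a", "KeyM": "m", "Semicolon": ";"}
AZERTY = {"KeyQ": "a", "KeyA": "q", "KeyM": ",", "Semicolon": "m"}

def char_for(layout: dict, physical_key: str) -> str:
    """Return the character a layout assigns to a physical key position."""
    return layout[physical_key]

# The same physical key yields different characters on the two layouts:
char_for(QWERTY, "KeyM")  # 'm'
char_for(AZERTY, "KeyM")  # ','
```

A shortcut or input handler that assumes the QWERTY position of ‘m’ (or ‘?’ or ‘@’) will misbehave for a French AZERTY user, which is precisely the class of defect keyboard input testing is meant to catch.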

Another important consideration that keeps nagging the internationalization quality engineer is the appropriate time at which each kind of testing should be done, so as to make testing both productive and cost effective. A step-by-step approach should be taken to monitor, control, and assure the international quality of products in an agile development environment. As discussed above, enablement and localizability testing should be performed early in the cycle, during the I18N phase of Globalization, while functional and linguistic testing may be performed during the L10N phase. The cost of fixing a bug grows the later it is found in the cycle, as shown in the illustration below.

To conclude, here are some best practices for delivering high quality internationalized software applications:

  1. Onboard a team that understands functionality, language, and culture alike.
  2. Plan testing so that enablement, keyboard input, and localizability testing happen early in the cycle, uncovering linguistic bugs sooner and minimizing the overall cost.
  3. Make sure to cover the culturalization aspects along with the functionality.

In today’s world, a product may be made global, but ensuring its quality is critical to its success; therefore, following the right approach and methodologies is important. The testing practices discussed above form the backbone of internationalization testing and quality.

About the author: Nidhi is a software quality professional with over 11 years of experience in the software industry and most recently has been working in the Globalization space at Adobe. She has been instrumental in managing the International quality of various Adobe products ranging from Creative Cloud to Adobe Elements. Not only is she an avid technology enthusiast and explorer, but also an active innovation evangelist.

Automation Journey in the world of L10n!!


Feb’14, Reetika Ghai


This blog talks about the importance of automation in the world of localization, and its increased need in the world of Agile.

Paradigm Shift from Waterfall to Agile in the World of Localization

Do you know which is the fastest land animal in the world, reaching speeds of up to 113 km/h?

Over the last two years, there’s been a gradual shift in the Software Development Life Cycle (SDLC) methodology for most of the Adobe flagship products. Product management has moved from yearlong waterfall product development life cycle to sprint-based Agile methodology (based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams).

As a result of changing market trends, we need to reinvent our approach to localization testing to meet the requirements of Agile methodology. In Agile, development and test cycles are shorter, with quick turnaround times. In localization, test volumes spike given a sprint cycle of 2-3 weeks. Features require frequent validation across multiple locales and platforms before being certified for release in a simultaneous-release model (sim-GM). In the Agile framework, it is important to be cognizant of the business goals from a localization perspective. I would categorize these into three broad areas:

  1. Time-boxed, frequent releases to market: With Agile, most Adobe products ship at least one release every quarter, with frequencies as high as weekly or monthly releases.
  2. Increased test scope leading to increased localization effort: With each sprint, the legacy scope to certify the localized build increases.
  3. Higher focus on rapid new-feature development with simultaneous release to market: Certifying features on N locales and M platforms in a sprint of 3-4 weeks.

These goals create the following challenges for an International Quality Engineering (IQE) team while deciding the scope on the localized builds for localization testing:

    • Ensuring increased test coverage on new features while balancing the coverage of legacy feature areas
    • Ensuring re-usability of tests across various platform variants
    • Ensuring test accuracy across repetitive scenarios in multiple languages
    • Ensuring faster execution to uncover defects early on
    • Ensuring all features work as expected on all supported platforms and locales
    • Ensuring co-existence with different versions of the released product/patches
    • Ensuring the product ships simultaneously in all supported locales across geographies (sim-GM)
    • Ensuring optimized test coverage on all supported locale and platform variants


Automation in Agile & Localization

Why automation testing in localization?

  • Multiple releases in a year
  • High volume of testing
  • Complexity of platform–locale combinations
  • Improved test coverage, leading to better quality
  • Scalability
  • Faster time to market
  • Cost effectiveness

With these initial thoughts, we proposed expanding the automation coverage of Dreamweaver product features from English to the localized languages in September 2012. Our initial goal was to attain 45% feature automation coverage on localized builds, relative to the coverage on the English build, on the Mac platform.

Gradually we built the feature automation capabilities in the next six months, starting from enabling the automation framework for localization (i.e., added support to the automation framework to run scripts on the localized operating system) to running daily smoke tests on all the 15 supported languages, and eventually having good feature level automation coverage.


Automation is a great way to overcome the above challenges and effectively achieve optimized test coverage on localization builds. With automation, it becomes possible to certify incremental Creative Cloud releases for all the supported operating-system and language combinations, supporting the time-bound releases.

With multiple releases to market in a year, manual execution of the repeatable test scope by the localization vendors leads to increased test effort. The major part of this increase can be attributed to the incrementally growing legacy test scope, i.e., the legacy scope is the cumulative sum of all of a product’s past deliverables and grows with each sprint. Automated tests, on the other hand, can be run over and over again, ensuring defined coverage across platform and language combinations and thereby contributing to the overall product quality for the time-boxed release. This not only eliminates test redundancy but also helps produce faster test results.
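The “run over and over again” part is essentially a matrix sweep; as an illustrative sketch (the locale and platform lists are abbreviated, and `test` stands in for any automated check):

```python
from itertools import product

LOCALES = ["de_DE", "fr_FR", "ja_JP"]   # subset of the 15 supported locales
PLATFORMS = ["mac", "win"]

def run_suite(test):
    """Run one automated check across every locale/platform combination."""
    return {
        (locale, platform): test(locale, platform)
        for locale, platform in product(LOCALES, PLATFORMS)
    }
```

Once a check is written this way, adding a locale or platform grows the matrix automatically, whereas the equivalent manual pass grows linearly in tester effort with every sprint.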

Having the legacy areas automated helps the localization tester focus manual effort on the current sprint’s deliverables, and hence uncover defects early in the test cycle. The IQE needs to be cautious in deciding the scope of automation on localized builds; prioritizing the automation coverage is very important.

With each quarterly release to market, the certification scope of a product’s legacy feature set increases, leading to amplified repeatable test effort across multiple sprints, compared to the one-time validation of the yearly release model.

Legacy Automation Coverage

Journey into Dreamweaver Automation

The DW localization team achieved 88% functional coverage and 86.5% conditional coverage, against the core coverage of 50% conditional and functional coverage, in the CC release for Mac!

For adopting product automation in localized languages, our journey started by answering a few initial questions:

  • What features do we need to automate?
  • What will be the sequence of feature automation?
  • What locales should we consider to start with, based on data from prerelease and bug history?
  • What would be the best approach for optimized test coverage in the different locales?
  • In the automation framework, where should the locale-specific strings (used in test scripts) be placed? Or should we pull the strings for comparison directly from the Adobe Localization Framework (ALF) at runtime?
  • How much effort is required for adopting automation on locales?
  • What would be the initial setup required to start automation in the different locales?
  • How much additional effort is required for running automation scripts in localized builds regularly?
  • What will be the hardware needs and the challenge to meet them?
  • What should be the frequency of automation runs (daily smoke, basic feature plan, and new feature plan)?
  • How to have the best execution turnaround time on all locales? What should be the optimization matrix considering fast turnaround time in agile?

The initial six-month journey into adopting the automation framework for Dreamweaver localization:

Time chart

Dreamweaver Automation Framework

Dreamweaver automation is based on Adobe’s homegrown automation framework, ‘Jerry’, developed by the Dreamweaver English QE team. It is written in core Java, supported by AppleScript and JavaScript in the backend, and makes use of the Dreamweaver APIs exposed by the developers.


The diagram depicts the automation workflow:

Step 1: A job ticket (containing configuration details like the TC #, platform, machine details, language information, etc.) is fed into the Pulpo server.

Step 2: The Pulpo server’s primary purpose is machine management and result logging. It invokes the test machine and executes the test automation based on the plan mentioned in the job ticket.

Step 3: Once execution is complete, the logs/results are copied to the Pulpo server for further analysis.

Step 4: Results are logged to the automation dashboard, “AutoDash”.
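A job ticket of the kind described in Step 1 might look like the following sketch; the field names and values are illustrative, not Pulpo’s actual schema:

```python
# Hypothetical job ticket; field names and values are illustrative only.
job_ticket = {
    "test_plan": "daily_smoke",          # which grouped plan to run
    "test_cases": ["TC-101", "TC-102"],  # TC #s from the plan
    "platform": "mac",
    "machine": "imac-loc-01",            # machine managed by the server
    "locale": "ja_JP",                   # one of the 15 supported locales
    "build": "latest",                   # e.g. triggered on a new build's arrival
}
```

One such ticket per locale (15 in total) is what gets queued to the server, either manually or automatically on the arrival of a new build.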

The Jerry framework contains automated test cases grouped under various test plans:

Daily Smokes – basic tests for validation of the daily build

Basic Features Plan – contains test cases for the legacy and new areas, covering feature smokes, in Test Studio

Acceptance Plan – contains acceptance and full test-pass coverage for features developed in the legacy and present release cycles, in Test Studio

We started with one iMac machine dedicated to Dreamweaver automation. Soon after the proof of concept was successful, we added one more dedicated machine for automation on localized builds. The above test plans were executed on a pre-scheduled basis across all 15 locales, following the predefined execution plan. Job tickets distributed across the 15 locales were fed to the Pulpo server either manually or automatically, triggered on the arrival of a new build in Codex. Typically, by the time we arrived at the office, build sanity was completed on all the locales and we were good to share the builds with our vendor partners.

For monitoring and optimization of test coverage across the 15 languages, a dedicated execution calendar was followed. Based on the calendar, different automation test plans were executed on various locale/platform combinations on a daily basis. Daily smoke tests for build validation were executed, followed by dedicated full-feature test passes on the weekends. The execution was pre-scheduled, and the test coverage was distributed across locales for optimal results given the time and machine constraints.
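Such an execution calendar can be generated with a simple round-robin distribution; a sketch under assumed names (the locale list is abbreviated to six of the fifteen, and daily smokes run on every locale regardless):

```python
from itertools import cycle

LOCALES = ["ja_JP", "de_DE", "fr_FR", "es_ES", "it_IT", "ko_KR"]  # 6 of the 15
WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]

# Round-robin: spread full feature passes across weekdays so that every
# locale gets a deep pass without overloading any single day or machine.
schedule = {day: [] for day in WEEKDAYS}
for day, locale in zip(cycle(WEEKDAYS), LOCALES):
    schedule[day].append(locale)
```

The same idea extends to the platform axis: rotating the locale/platform pairs through the calendar is what keeps coverage optimal under fixed time and machine constraints.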

Accomplishments & Learnings


In the Creative Cloud (CC) release, we benefitted from having automated test passes on localized builds across 15 languages:

  • Overall test-coverage efficiency improved fourfold compared to manual test execution
  • A quick sanity test for acceptance of the localization build, before passing it to vendor partners, increased efficiency
  • Achieved quick turnaround times for basic feature testing via automation scripts
  • Parallel certification on multiple builds (patch and mainline builds)
  • More focus by the localization functional testers on the new features developed in the current sprint
  • Prerelease build certification performed completely through automation
  • Build blueprint and code-sign verification through automation on all locales in 2 hours, compared to 32 hours of manual verification


And some learnings along the way:

  • Support from the core team: It is essential to have the automation blessed by the English team for optimal support. In the case of Dreamweaver, we got immense support from the team, especially from Kiran Patil (Quality Manager) and Arun Kaza (Sr. Quality Lead), in driving the automation efforts on localized builds.
  • Pilot the automation framework on one Tier-1, double-byte, or Cyrillic locale to ensure the framework is robust and will support automation on most of the locales.
  • Always document the issues/challenges you face while setting up automation; they act as a reference point later.
  • Ensure core scripts are independent of English strings. In Dreamweaver, updating the legacy automation scripts to run on localized builds was a big challenge, as automation scripts were failing at string comparisons. Aishvarya Suhane (Localization Automation QE) was a great help in writing functions in the automation framework and creating a few new scripts to resolve localization-specific issues.
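The last point – keeping scripts independent of English strings – usually comes down to resolving expected UI text from the product’s string resources at runtime rather than hard-coding English literals. A minimal sketch, with an illustrative in-memory resource table standing in for a runtime lookup such as ALF:

```python
# Illustrative resource table; a real framework would load this from the
# product's string resources at runtime rather than embedding it.
STRINGS = {
    "en_US": {"menu.file.save": "Save"},
    "de_DE": {"menu.file.save": "Speichern"},
}

def expected(key: str, locale: str) -> str:
    """Look up the expected UI string for the locale under test."""
    return STRINGS[locale][key]

# The same comparison now passes on any locale:
# assert ui_label == expected("menu.file.save", current_locale)
```

Scripts written against string keys instead of English literals are exactly the ones that survive a localized run without failing at string comparisons.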

Cheetahs in the world of localization …

Special thanks to Guta Ribeiro for inspiring and mentoring me to write my first blog, and to Rakesh Lal for his support.