Why ADP’s Next-Gen HCM Is A Disruptive Force In HR Technology

Well, here it is a year later, and it looks like ADP has done it. The company’s next-generation HCM and payroll system is now available, and it could become one of the more disruptive systems on the market. While the system is still young, it sets a technical direction for Workday, SAP, Oracle, and others.

How The HR Software Market Has Changed

Let me briefly discuss how the HR software market has changed. Core Human Capital systems are a large, growing and important market. Once considered the “system of record” for employees, they are now used by every company as a way to keep track of people’s jobs and work, plan and facilitate careers, and make sure people are paid correctly. 

Now, they are changing again. 

Today’s HCM platforms are no longer just systems of record; they are systems to make employees’ work lives better. They have to support many organization models (hierarchy, teams, projects, contractors, gig workers); they have to address many forms of reward and pay (salary, hourly, by the project, by output); and they have to be open to many third-party applications.

Organizations now function as talent networks, not hierarchies. 34% of companies tell us they operate as a network (up from 6% in 2016), and more than 88% tell us they want better technology to manage gig and contract work. Zappos, Schneider Electric, Unilever and many others now manage themselves as “talent marketplaces,” encouraging people to play roles in multiple teams around the world.

And these new HCM platforms are not just “applications,” but rather micro-services platforms where applications run. Some of the most innovative apps in HR now come from third parties. HCM vendors simply cannot build everything themselves. I now think of core HCM as “application ecosystems,” more like the iPhone than like QuickBooks.

Moreover, these systems have to be designed around “experiences,” not “processes.” The word Experience is now the biggest buzzword in HR, and it is profoundly changing the way software is developed. It’s no longer sufficient to build forms, tabs, and buttons for users: now we have to build systems that adapt to our needs, listen to our voice, change based on our data, and can be configured in many ways.

(Both SuccessFactors and Workday are just launching Experience Layers on top of their systems to address this.)

Finally, the HCM system of the future has to be an employee productivity tool, not just an HR tool. It isn’t designed for HR anymore; it must be designed for employees and managers. The system should be useful, simple to use, and must interface with Microsoft Teams, Slack, WhatsApp, and all the other collaboration tools we use at work.

In short, this is a whole new world – and it requires a new architecture, new user experience, and new technology stack.

ADP, An Unexpected Tech Leader

This industry is not for the faint of heart. Building an enterprise platform takes years, and once you start you’re stuck with the architecture you start with.

Workday’s architecture is fourteen years old; while it was quite innovative, it now feels proprietary. SuccessFactors is similar in age and is now being re-engineered around SAP HANA and a new Experience interface. Oracle recently re-engineered its HCM platform, and it took almost five years. So when a company like ADP starts from scratch, it can upset the apple cart.

While many customers rushed to buy cloud-based HCM systems, their satisfaction has been mixed. The platforms are highly complex, they don’t accommodate new organization and performance models, and buyers want more innovation. HR departments want a stable, reliable HCM platform, but they also want to be able to mix and match best-of-breed applications on top.

Today, using what are called “cloud-native” systems, vendors can build modern applications faster than ever. And technologies like AI, cognitive interfaces, natural language processing, and graph databases are readily available from Amazon Web Services, Google Cloud, or Microsoft.

Enter ADP.

ADP you say? Aren’t they a 70-year-old payroll company?  What are they doing in the cloud architecture business?

Well yes, ADP does pay more than 40 million people in the US (one in six). But behind the scenes, the company is filled with technologists, and its new Lifion group has assembled some of the most senior tech architects in the world.

As Carlos Rodriguez, the CEO, and Don Weinstein, the head of Global Product and Technology, put it, ADP used to be a “services company fueled by technology.” Now it is becoming “a technology company with great services.” In other words, the company has invested heavily in its platform.

The new platform, today called ADP Next Gen HCM (a real name will come), has the architecture other vendors only talk about, and as it picks up speed it could become a major disruptor in the market.

What Is ADP Next Gen HCM?

Let me explain what ADP has done.

Through a skunk-works development team in Chelsea, NY, the company has been rewriting its payroll engine and HCM platform for several years. The project, originally called Lifion, is a “cloud-native” platform which embraces the latest technology stack needed to scale for the future. 

“Cloud-native” simply means it’s built on the newest, containerized services, leveraging the latest technology in the cloud. This means the system is made up of many micro-apps, it uses low-code development, it leverages graph and SQL databases, and it never goes down for maintenance.

Let me give you some specifics.

  • ADP’s new architecture is designed around teams, not hierarchies, so it has capabilities to manage the future of work. You can create teams of any type in the system, and then include any type of worker in a team (full-time, part-time, contingent). Teams inherit the hierarchical attributes of their members (i.e. who they report to), but those attributes are also associated with the team. (Imagine a project team working on a new product, a safety team, and even an employee resource group.)
  • Unlike other HCM systems, each team is an entity in itself, with its own business rules, apps, and measurement systems. You could have one team that uses an OKR goal application and another that uses a different survey tool. Teams are essentially the “grain” of the architecture, not the hierarchy. This is only possible because the system uses a graph database. Graph database technology models data as relationships, not rows and columns (it’s the technology underlying Facebook and Google), and it has immense potential in organizations today; a tiny illustration of the idea appears after this list.
  • The system is designed for “micro-apps and micro-services.” This means ADP can quickly build new applications, plug third-party applications into the system, and open up the system for users and consultants to build apps. Think of the ADP platform as a giant iPhone: you can plug in any app and inherit all the data and security you’ve already built. You can assign apps to teams, so one team can use one type of goal setting, another can use other features, and so on.
  • The development environment is “low-code,” meaning you can build apps in a visual tool. This means ADP and partners can extend the system easily, creating a flexible non-proprietary model to grow and expand.
  • ADP’s system is mobile-first and visually simple. The system uses a consumer-like interface (similar to Google) and seems very easy to use. Workday, which originally built a very innovative user interface, is feeling its age and plans a major upgrade this fall. SuccessFactors’ new HXM (Human Experience Management) interface is also a major push in this direction.
  • ADP’s AI engine is useful out of the box. And that’s not just because it uses AI, it’s because ADP has so much data. ADP houses more data about workers and jobs than any other company in the world, so if you want to know if your people are underpaid or if your retention is out of line, ADP has benchmarks you can use. The AI-based intelligence application delivers suggestions and recommendations on hundreds of talent issues, all in a “narrative intelligence” interface. 
  • Just to let you know how much data the company has, ADP has more than 800,000 customers and a skills-cloud with more than 30 million employees’ job descriptions embedded. 
  • ADP’s talent applications are also coming along. Clients sometimes complain about various parts of ADP’s recruitment or learning software, but StandOut, ADP’s next-generation engagement, goal, performance management, and team coaching system, is a very competitive product. It is integrated into Next Gen HCM so it can be deployed immediately to any or all teams. The product has been highly successful inside ADP, driving a 6% improvement in engagement and a 12% increase in sales productivity. 97% of ADP associates have completed the StandOut assessment, a level of adoption most companies can only dream of. (Cisco is also a big fan.)
  • ADP’s Next Gen Payroll engine, coupled with the company’s acquisition of Celergo, uses a reusable rules engine to greatly reduce the complexity of payroll. Payroll is a complex business operation filled with lots of special rules, so the Next Gen payroll system is designed to be “rules driven.” Microsoft uses ADP’s Global Payroll and has reduced the number of global payroll administrators from 400 to a handful of payroll SMEs across the globe.
  • ADP’s new payment system is redesigned for real-time pay (the payroll engine computes all gross-to-net and deductions in real-time). This lets companies pay employees and contractors more frequently.
  • ADP’s Wisely system, the company’s smart payment app, is gaining more than 250,000 new members per month, making it one of the fastest-growing payment systems in the market. (Wisely lets you allocate pay to different categories, automatically create various forms of savings accounts, and use credit/debit and other pay methods right from your payroll.)
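
To make the “relationships, not rows and columns” idea concrete, here is a tiny, purely illustrative Python sketch of a worker/team graph. It has nothing to do with ADP’s actual implementation; every name is made up, and a real system would use a graph database rather than an in-memory dictionary.

from collections import defaultdict

# Purely illustrative: model workers and teams as a graph of relationships
# rather than rows in a single hierarchy table. All names are hypothetical.
edges = defaultdict(set)   # node -> set of directly related nodes

def relate(a, b):
    edges[a].add(b)
    edges[b].add(a)

# One worker can belong to several teams of different types,
# and the reporting line is just one more relationship.
relate("worker:maria", "team:payroll-product")
relate("worker:maria", "team:safety-committee")
relate("worker:maria", "manager:devon")
relate("worker:lee",   "team:payroll-product")

# "Who is on the payroll-product team?" is a one-hop traversal.
print(sorted(n for n in edges["team:payroll-product"] if n.startswith("worker:")))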


ETL: What It Is and Why It Matters

ETL is a type of data integration that refers to the three steps (extract, transform, load) used to blend data from multiple sources. It’s often used to build a data warehouse. During this process, data is taken (extracted) from a source system, converted (transformed) into a format that can be analyzed, and stored (loaded) into a data warehouse or other system. Extract, load, transform (ELT) is an alternate but related approach designed to push processing down to the database for improved performance.
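
As a concrete (and deliberately tiny) illustration of those three steps, here is a minimal ETL sketch in Python using only the standard library. The file name, column names, and target table are hypothetical; a production pipeline would use a real warehouse and an ETL tool rather than SQLite.

import csv
import sqlite3

# Minimal ETL sketch: extract rows from a CSV file, transform them,
# and load them into a SQLite table standing in for a warehouse.
def extract(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Example transformation: normalize names and cast amounts to numbers.
    return [{"customer": r["customer"].strip().title(),
             "amount": float(r["amount"])} for r in rows]

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS sales (customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales (customer, amount) VALUES (:customer, :amount)", rows)
    conn.commit()

if __name__ == "__main__":
    connection = sqlite3.connect("warehouse.db")
    load(transform(extract("sales.csv")), connection)
    connection.close()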

ETL History

ETL gained popularity in the 1970s when organizations began using multiple data repositories, or databases, to store different types of business information. The need to integrate data that was spread across these databases grew quickly. ETL became the standard method for taking data from disparate sources and transforming it before loading it to a target destination.

In the late 1980s and early 1990s, data warehouses came onto the scene. A distinct type of database, data warehouses provided integrated access to data from multiple systems – mainframe computers, minicomputers, personal computers and spreadsheets. But different departments often chose different ETL tools to use with different data warehouses. Coupled with mergers and acquisitions, many organizations wound up with several different ETL solutions that were not integrated.

Over time, the number of data formats, sources and systems has expanded tremendously. Extract, transform, load is now just one of several methods organizations use to collect, import and process data. ETL and ELT are both important parts of an organization’s broader data integration strategy.

Why ETL Is Important

Businesses have relied on the ETL process for many years to get a consolidated view of the data that drives better business decisions. Today, this method of integrating data from multiple systems and sources is still a core component of an organization’s data integration toolbox.

Extract Transform Load - infographic

ETL is used to move and transform data from many different sources and load it into various targets, like Hadoop.

  • When used with an enterprise data warehouse (data at rest), ETL provides deep historical context for the business.
  • By providing a consolidated view, ETL makes it easier for business users to analyze and report on data relevant to their initiatives.
  • ETL can improve data professionals’ productivity because it codifies and reuses processes that move data without requiring technical skills to write code or scripts.
  • ETL has evolved over time to support emerging integration requirements for things like streaming data.
  • Organizations need both ETL and ELT to bring data together, maintain accuracy and provide the auditing typically required for data warehousing, reporting and analytics. 


How to get data from Workday in SSIS using SOAP/REST API

How to call Workday API in SSIS (Read or Write Data)

Here are the high-level steps to read or write Workday data in SSIS.

  1. Obtain Workday WSDL URL (Service Metadata) and API URL for your tenant
  2. Craft the POST request XML using a tool like SoapUI for the desired operation (e.g. Get_Employee).
  3. Configure the SSIS HTTP Connection (for SOAP WSS) using the API URL and your Workday user ID / password.
  4. Call the Workday API using any of these tasks or components: SSIS XML Source, SSIS REST API Task, or SSIS Web API Destination, to read / write data.

Now let's look at each step in detail in the following sections. NOTE: If you are trying to get data from a Workday report instead (i.e. your Workday admin created a report and gave you a link), then skip the SoapUI part and use the report URL with the GET method and Basic authentication instead of SOAP WSS.

Obtain Workday SOAP WSDL URL (API Metadata URL)

The first step to consuming the Workday API from SSIS is to download the SOAP WSDL file. A WSDL is an XML file which describes the available API operations and the structure of the request and response. Workday publishes a list of WSDLs for its various API services; right-click the WSDL icon and save it to your local disk. We will use this WSDL in the next section to craft a SOAP request using the SoapUI tool.


Obtain Workday API URL

Once you have the WSDL file, the next step is to craft the correct URL for the API service you would like to call. The service name can be obtained from the same WSDL list (check the Service column).

Syntax: https://<workday host name>.workday.com/ccx/service/<tenant name>/<service-name>
Example:
 https://MY-INSTANCE.workday.com/ccx/service/MY-TenantID/Human_Resources

Craft SOAP Body (XML API Request) using SoapUI

Now it's time to craft the SOAP request. Check the steps outlined here (use the SoapUI tool). Once you have the request body XML, you can change the parameters as per your need.

Here is a sample SOAP XML body for the Get Employee call from the Human_Resources service.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:bsvc="urn:com.workday/bsvc">
   <soapenv:Header/>
   <soapenv:Body>
      <bsvc:Employee_Get>
         <bsvc:Employee_Reference>
            <bsvc:Integration_ID_Reference>
               <bsvc:ID>gero et</bsvc:ID>
            </bsvc:Integration_ID_Reference>
         </bsvc:Employee_Reference>
      </bsvc:Employee_Get>
   </soapenv:Body>
</soapenv:Envelope>
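
Outside of SSIS, you can sanity-check the same request with a short Python script before wiring up the package. The sketch below simply posts the SoapUI-crafted envelope to the API URL described above, with a WS-Security UsernameToken header added; the host, tenant, user name, password, and the user@tenant naming convention are all placeholders/assumptions, so treat it as illustrative rather than a drop-in client.

import requests

# Illustrative only: post the SoapUI-crafted envelope to the Workday API URL.
# Host, tenant, user and password below are placeholders.
API_URL = "https://MY-INSTANCE.workday.com/ccx/service/MY-TenantID/Human_Resources"

ENVELOPE = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:bsvc="urn:com.workday/bsvc">
   <soapenv:Header>
      <!-- WS-Security UsernameToken (the SOAP WSS credential described above) -->
      <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
         <wsse:UsernameToken>
            <wsse:Username>MYUSER@MY-TenantID</wsse:Username>
            <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">MYPASSWORD</wsse:Password>
         </wsse:UsernameToken>
      </wsse:Security>
   </soapenv:Header>
   <soapenv:Body>
      <bsvc:Employee_Get>
         <bsvc:Employee_Reference>
            <bsvc:Integration_ID_Reference>
               <bsvc:ID>gero et</bsvc:ID>
            </bsvc:Integration_ID_Reference>
         </bsvc:Employee_Reference>
      </bsvc:Employee_Get>
   </soapenv:Body>
</soapenv:Envelope>"""

response = requests.post(API_URL, data=ENVELOPE.encode("utf-8"),
                         headers={"Content-Type": "text/xml; charset=utf-8"})
print(response.status_code)
print(response.text[:500])   # first part of the SOAP response (or fault)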

Create SOAP Request Body from WSDL (Using SoapUI tool)

Creating SSIS Connection for Workday SOAP API call using WSS Security

To create a new connection for Workday, perform the following steps.

  1. There are two ways you can create an HTTP connection for Workday:
    First approach: Right-click in the Connection Managers panel, click “New Connection…”, select the ZS-HTTP connection type from the list and click OK.
    — OR —
    Second approach: If you are already on the SSIS XML Source, SSIS REST API Task, or SSIS Web API Destination UI, then click [New] next to the Connection dropdown.
  2. Once the HTTP Connection UI is visible, configure it the following way.
    1. Enter the API URL for Workday (make sure you don't enter the WSDL URL). Your API URL will be something like below.
      Syntax: https://<workday host name>.workday.com/ccx/service/<tenant name>/<service-name>
      Example:
       https://wd1-impl-services1.workday.com/ccx/service/MyTenantID/Human_Resources
    2. Select the credential type SOAP WSS (this setting is only found in v2.6.4 or higher).
    3. Enter your Workday user ID and password.
    4. For the WSS password type setting, leave the default (Not set) or change it to PasswordHash for more secure communication.
    5. Click OK to save.
SSIS Workday Integration - Create New SOAP Service Connection for Workday API Service (SOAP WSS)


Sqoop vs. Flume: Battle of the Hadoop ETL Tools

Apache Hadoop is synonymous with big data for its cost-effectiveness and its scalability for processing petabytes of data. Data analysis using Hadoop is just half the battle won; getting data into the Hadoop cluster plays a critical role in any big data deployment. Data ingestion is important in any big data project because the volume of data is generally in petabytes or exabytes.


Sqoop and Flume are the two tools in Hadoop that are used to gather data from different sources and load it into HDFS. Sqoop is mostly used to extract structured data from databases like Teradata, Oracle, etc., while Flume is used to collect data from a variety of sources and deals mostly with unstructured data.


Big data systems are popular for processing huge amounts of unstructured data from multiple data sources. The complexity of the big data system increases with each data source. Most business domains have different data types, such as marketing data, genomic data in healthcare, audio and video systems, telecom CDRs, and social media. All of these have diverse data sources, and data from these sources is continuously produced at large scale.

The challenge is to leverage the resources available and manage the consistency of the data. Data ingestion is complex in Hadoop because processing is done in batch, stream or real time, which increases the management overhead and complexity of the data. Some of the common challenges with data ingestion in Hadoop are parallel processing, data quality, machine data arriving at a scale of several gigabytes per minute, multiple-source ingestion, real-time ingestion and scalability.

Apache Sqoop and Apache Flume are two popular open source ETL tools for Hadoop that help organizations overcome the challenges encountered in data ingestion. If you are looking for the answer to the question “What’s the difference between Flume and Sqoop?” then you are on the right page.

The major difference between Sqoop and Flume is that Sqoop is used for loading data from relational databases into HDFS while Flume is used to capture a stream of moving data.

What is Sqoop in Hadoop?

Apache Sqoop (SQL-to-Hadoop) is a lifesaver for anyone who is experiencing difficulties in moving data from a data warehouse into the Hadoop environment. Apache Sqoop is an effective Hadoop tool used for importing data from RDBMSs like MySQL, Oracle, etc. into HBase, Hive or HDFS. Sqoop can also be used for exporting data from HDFS into an RDBMS. Apache Sqoop is a command-line interpreter, i.e. the Sqoop commands are executed one at a time by the interpreter.

Need for Apache Sqoop

With an increasing number of business organizations adopting Hadoop to analyse huge amounts of structured or unstructured data, there is a need to transfer petabytes or exabytes of data between existing relational databases, data sources, data warehouses and the Hadoop environment. Accessing huge amounts of data directly from MapReduce applications running on large Hadoop clusters, or loading it from production systems, is a complex task because ad hoc data transfer using scripts is often ineffective and time consuming.

How Apache Sqoop Works

Sqoop is an effective Hadoop tool for non-programmers. It works by looking at the databases that need to be imported and choosing a relevant import function for the source data. Once the input is recognized by Sqoop, the metadata for the table is read and a class definition is created for the input requirements. Sqoop can also operate selectively by importing just the columns needed instead of importing the entire table and searching for the data in it.

This saves a considerable amount of time. In reality, the import from the database to HDFS is accomplished by a MapReduce job that is created in the background by Apache Sqoop.
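
If you script your ingestion pipelines, a typical Sqoop import is just a command-line invocation. The sketch below shells out to sqoop from Python; it assumes the Sqoop CLI is installed and on the PATH, and the JDBC URL, credentials, table, and target directory are all placeholders.

import subprocess

# Sketch only: launch a typical "sqoop import" from Python. Assumes the Sqoop CLI
# is installed and on PATH; the JDBC URL, credentials, table and paths are placeholders.
cmd = [
    "sqoop", "import",
    "--connect", "jdbc:mysql://dbhost/sales",
    "--username", "etl_user",
    "--password-file", "/user/etl/.db_password",   # avoids putting the password on the command line
    "--table", "orders",
    "--columns", "order_id,customer_id,amount",    # import only the columns you need
    "--target-dir", "/data/raw/orders",
    "--num-mappers", "4",                          # parallel transfer via 4 map tasks
]
subprocess.run(cmd, check=True)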

Features of Apache Sqoop

  • Apache Sqoop supports bulk import i.e. it can import the complete database or individual tables into HDFS. The files will be stored in the HDFS file system and the data in built-in directories.
  • Sqoop parallelizes data transfer for optimal system utilization and fast performance.
  • Apache Sqoop provides direct input i.e. it can map relational databases and import directly into HBase and Hive.
  • Sqoop makes data analysis efficient.
  • Sqoop helps in mitigating the excessive loads to external systems.
  • Sqoop provides data interaction programmatically by generating Java classes.

What is Flume in Hadoop?

Apache Flume is a service designed for streaming logs into the Hadoop environment. Flume is a distributed and reliable service for collecting and aggregating huge amounts of log data. With a simple and easy-to-use architecture based on streaming data flows, it also has tunable reliability mechanisms and several recovery and failover mechanisms.

Need for Flume

Logs are usually a source of stress and argument in most big data companies. Logs are one of the most painful resources to manage for the operations team, as they take up a huge amount of space. Logs are rarely present at places on the disk where someone in the company can make effective use of them or where Hadoop developers can access them. Many big data companies wind up building tools and processes to collect logs from application servers and transfer them to some repository so that they can control their lifecycle without consuming unnecessary disk space.

This frustrates developers, as the logs are often not present at a location where they can view them easily; they have a limited number of tools available for processing logs and limited capability to manage the log lifecycle intelligently. Apache Flume is designed to address the difficulties of both the operations group and developers by providing an easy-to-use tool that can push logs from a bunch of application servers to various repositories via a highly configurable agent.

How Apache Flume Works

Flume has a simple event-driven pipeline architecture with three important roles: Source, Channel and Sink.

  • Source defines where the data is coming from, for instance a message queue or a file.
  • Sinks define the destination of the data pipelined from various sources.
  • Channels are pipes which establish a connection between sources and sinks.

Apache Flume works on two important concepts:

  1. The master acts like a reliable configuration service which is used by nodes for retrieving their configuration.
  2. If the configuration for a particular node changes on the master then it will dynamically be updated by the master.

A node is generally an event pipe in Hadoop Flume which reads from the source and writes to the sink. The characteristics and role of a Flume node are determined by the behaviour of its sources and sinks. Apache Flume is built with several source and sink options, but if none of them fits your requirements then developers can write their own. A Flume node can also be configured with the help of a sink decorator, which can interpret an event and transform it as it passes through. With all these basic primitives, developers can create different topologies to collect data on any application server and direct it to any log repository.
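
Flume agents themselves are configured with properties files rather than code, but the source → channel → sink idea is easy to picture with a toy pipeline. The Python sketch below is a conceptual analogy only (a producer thread, a bounded queue, and a consumer); it is not Flume code.

import queue
import threading
import time

channel = queue.Queue(maxsize=1000)        # the buffering "channel"

def source():
    # Stands in for a Flume source, e.g. tailing an application log.
    for i in range(5):
        channel.put(f"log line {i}")
        time.sleep(0.1)
    channel.put(None)                      # end-of-stream marker

def sink():
    # Stands in for a Flume sink, e.g. an HDFS writer.
    while True:
        event = channel.get()
        if event is None:
            break
        print("delivered:", event)

threading.Thread(target=source).start()
sink()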


How To Perform ETL Testing Using Informatica PowerCenter Tool

It is a known fact that ETL testing is one of the crucial aspects of any Business Intelligence (BI) based application. In order to get the quality assurance and acceptance to go live in business, the BI application should be tested well beforehand.

The primary objective of ETL testing is to ensure that the Extract, Transform & Load functionality is working as per the business requirements and in sync with the performance standards.

What you will learn in this ETL tutorial:

  • Basics of ETL, Informatica & ETL testing.
  • Understanding ETL testing specific to Informatica.
  • Classification of ETL testing in Informatica.
  • Sample test cases for Informatica ETL testing.
  • Benefits of using Informatica as an ETL tool.
  • Tips & Tricks to aid you in testing.

In computing, Extract, Transform, Load (ETL) refers to a process in database usage and especially in data warehousing that performs:

  • Data extraction – Extracts data from homogeneous or heterogeneous data sources.
  • Data Transformation – Formats the data into the required type.
  • Data Load – Moves and stores the data in a permanent location for long-term usage.

Informatica PowerCenter ETL Testing Tool:

Informatica PowerCenter is a powerful ETL tool from Informatica Corporation. It is a single, unified enterprise data integration platform for accessing, discovering, and integrating data from virtually any business system, in any format, and delivering that data throughout the enterprise at any speed. Through Informatica PowerCenter, we create workflows that perform end-to-end ETL operations.

Download and Install Informatica PowerCenter:

To install and configure Informatica PowerCenter 9.x use the below link that has step by step instructions:
=> Informatica PowerCenter 9 Installation and Configuration Guide

Understanding ETL testing specific to Informatica:

ETL testers often have pertinent questions about what to test in Informatica and how much test coverage is needed.

Let me take you through a tour on how to perform ETL testing specific to Informatica.

The main aspects which should be essentially covered in Informatica ETL testing are:

  • Testing the functionality of the Informatica workflow and its components, including all the transformations used in the underlying mappings.
  • Checking data completeness (i.e. ensuring that the projected data gets loaded to the target without any truncation or data loss); a simple completeness check is sketched after this list.
  • Verifying that the data gets loaded to the target within the estimated time limits (i.e. evaluating the performance of the workflow).
  • Ensuring that the workflow does not allow any invalid or unwanted data to be loaded into the target.
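
Data-completeness checks like the one above are often automated outside Informatica as well, for example as a post-load verification script. The sketch below compares row counts and a simple column checksum between a hypothetical staging table and the target; it assumes pyodbc with working ODBC DSNs, and all connection strings, tables and columns are placeholders.

import pyodbc

def count_and_sum(conn_str, sql):
    conn = pyodbc.connect(conn_str)
    try:
        return conn.cursor().execute(sql).fetchone()
    finally:
        conn.close()

# Placeholder DSNs, tables and columns; Stg_Product is a hypothetical staging copy
# of the source data, Tbl_Product is the target table from the example below.
src_count, src_sum = count_and_sum(
    "DSN=SourceDB;UID=etl;PWD=secret",
    "SELECT COUNT(*), COALESCE(SUM(Prod_price), 0) FROM Stg_Product")
tgt_count, tgt_sum = count_and_sum(
    "DSN=TargetDB;UID=etl;PWD=secret",
    "SELECT COUNT(*), COALESCE(SUM(Prod_price), 0) FROM Tbl_Product")

assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"
assert src_sum == tgt_sum, f"Checksum mismatch: {src_sum} vs {tgt_sum}"
print("Completeness check passed:", tgt_count, "rows")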

Classification of ETL Testing in Informatica:

For better understanding and ease of the tester, ETL testing in Informatica can be divided into two main parts –

#1) High-level testing
#2) Detailed testing

Firstly, in the high-level testing:

  • You can check if the Informatica workflow and related objects are valid or not.
  • Verify that the workflow completes successfully when run.
  • Confirm if all the required sessions/tasks are being executed in the workflow.
  • Validate if the data is getting loaded to the desired target directory and with the expected filename (in case the workflow is creating a file), etc.

In a nutshell, you can say that the high-level testing includes all the basic sanity checks.

Coming to the next part i.e. detailed testing in Informatica, you will be going in depth to validate if the logic implemented in Informatica is working as expected in terms of its results and performance.

  • Do output data validations at the field level, which will confirm that each transformation is operating correctly.
  • Verify the record count at each level of processing and, finally, that the target count is as expected.
  • Thoroughly monitor elements like the source qualifier and target in the session's source/target statistics.
  • Ensure that the run duration of the Informatica workflow is on par with the estimated run time.

To sum up, we can say that the detailed testing includes a rigorous end to end validation of Informatica workflow and the related flow of data.

Let us take an example here:

We have a flat file that contains data about different products. It stores details like the name of the product, its description, category, date of expiry, price, etc.

My requirement is to fetch each product record from the file, generate a unique product id corresponding to each record and load it into the target database table. I also need to suppress those products which either belong to the category ‘C’ or whose expiry date is less than the current date.

Say, my flat file (source) looks like this:


(screenshot of the source flat file)

Based on my requirements stated above, my database table (Target) should look like this:

Table name: Tbl_Product

Prod_ID (Primary Key) | Product_name | Prod_description      | Prod_category | Prod_expiry_date | Prod_price
1001                  | ABC          | This is product ABC.  | M             | 8/14/2017        | 150
1002                  | DEF          | This is product DEF.  | S             | 6/10/2018        | 700
1003                  | PQRS         | This is product PQRS. | M             | 5/23/2019        | 1500

Now, say, we have developed an Informatica workflow to get the solution for my ETL requirements.

The underlying Informatica mapping will read data from the flat file and pass it through a Router transformation that discards rows which either have product category ‘C’ or an expiry date earlier than the current date. Then I will use a Sequence Generator to create the unique primary key values for the Prod_ID column in the Product table.

Finally, the records will be loaded to the Product table, which is the target for my Informatica mapping.
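
Because the mapping logic is simple, it is easy to build an independent set of expected results to compare against the Informatica target table. The Python sketch below applies the same two rules (drop category ‘C’ and expired products, then assign sequential Prod_IDs); the input file name is hypothetical, the flat file is assumed to be CSV, and the date format follows the sample data above.

import csv
from datetime import datetime

def expected_rows(path, start_id=1001, today=None):
    today = today or datetime.today()
    out, next_id = [], start_id
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            expired = datetime.strptime(row["Prod_expiry_date"], "%m/%d/%Y") < today
            if row["Prod_category"] == "C" or expired:
                continue                              # the Router discards these rows
            out.append({"Prod_ID": next_id, **row})   # the Sequence Generator's job
            next_id += 1
    return out

# "flat_file.csv" is a placeholder for the source file shown above.
for r in expected_rows("flat_file.csv"):
    print(r["Prod_ID"], r["Product_name"])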

Examples:

Below are the sample test cases for the scenario explained above.

You can use these test cases as a template in your Informatica testing project and add/remove similar test cases depending upon the functionality of your workflow.

#1) Test Case ID: T001

Test Case Purpose: Validate workflow – [workflow_name]

Test Procedure:

  • Go to workflow manager
  • Open workflow
  • Workflows menu-> click on validate

Input Value/Test Data: Sources and targets are available and connected
Sources: [all source instances name]
Mappings: [all mappings name]
Targets: [all target instances name]
Session: [all sessions name]

Expected Results: Message in workflow manager status bar: “Workflow [workflow_name] is valid “

Actual Results: Message in workflow manager status bar: “Workflow [workflow_name] is valid “

Remarks: Pass

Tester Comments:

#2) Test Case ID: T002

Test Case Purpose: To ensure if the workflow is running successfully

Test Procedure:

  • Go to workflow manager
  • Open workflow
  • Right click in workflow designer and select Start workflow
  • Check status in Workflow Monitor

Input Value/Test Data: Same as test data for T001

Expected Results: Message in the output window in Workflow manager: Task Update: [workflow_name] (Succeeded)

Actual Results: Message in the output window in Workflow manager: Task Update: [workflow_name] (Succeeded)

Remarks: Pass

Tester Comments: Workflow succeeded

Note: You can easily see the workflow run status (failed/succeeded) in the Workflow Monitor, as shown in the example below. Once the workflow completes, the status will be reflected automatically in the Workflow Monitor.


Why Informatica Should Be Used As An ETL Tool Over Teradata

For managing databases, ETL means three different functions: Extract, Transform, and Load. Together, they form a combined programming toolchain for obtaining temporary data subsets for reports, as well as more permanent data sets for other purposes. Being a highly important function in database management, ETL is achieved by using different tools and programs.

Informatica ETL: Beginner's Guide | Informatica Tutorial | Edureka (video)

Benefits of Informatica and why it should be used over Teradata as an ETL tool:

  • A very robust platform that is easy to learn, with no programming required
  • Informatica creates an ecosystem in which it is quicker for analysts to perform different analyses, and which is much easier to maintain
  • Informatica Workflow Monitor helps with easy, simple and quick job monitoring and recovery
  • Faster SDLC and enhanced application support, thanks to the extensive accelerators and tools available through Informatica Marketplace
  • Informatica provides extensive connection support for several databases, including the regular ODBC drivers, TPump, Teradata MLoad, Parallel Transporter, as well as FastLoad
  • Informatica provides much faster processing for surrogate key generation through shared sequence generators; this process is much slower when performed inside the database
  • Database information can be easily processed through pushdown optimization
  • Informatica has the ability to publish database processes as web services, conveniently, easily and speedily
  • Informatica helps balance the load between the database box and the ETL server, with coding capability; this is beneficial for performing certain tasks when the server has a fast disk
  • Informatica enables easy migration of projects from one database to another, such as from Teradata to a different database, by changing the ETL code, providing an automated solution in an efficient and accurate manner


How to build OpenStack on Kubernetes

Workday’s OpenStack journey has been a fast one: from no cloud to five distributed data centers, over a thousand servers and 50,000 cores in four years.

Workday "Creating an Effective Developer Experience on Kubernetes"

The team has done much of the work themselves. At the Sydney Summit, Edgar Magana and Imtiaz Chowdhury, both of Workday, gave a talk about how they work with Kubernetes to achieve zero downtime for large-scale production software-as-a-service.

“We’re very happy about what we’ve achieved with OpenStack,” says Magana, who has been involved with OpenStack since 2011 and currently chairs the User Committee.

As deploying OpenStack services on containers becomes more popular and simpler, operators are jumping on the container bandwagon.

However, although many open source and paid solutions are available, few offer the options to customize OpenStack deployment to meet requirements for security, business and ops.

“Everything we’ve done so far, it’s fully automated and we do it in a way that developers can make changes and deploy it all the way to production after getting it tested,” says Chowdhury. “We also want to make sure we can upgrade and maintain with zero downtime.”

Magana gave an overview of the current architecture at Workday, noting that “We’re not doing  anything crazy, we have a typical reference architecture for OpenStack, but we have made a few changes.”

Where you’d normally have the OpenStack controller with all the OpenStack projects (Keystone, Nova, etc.), Workday decided to abstract out the stateful services and build out what they call an “infra server.”

Coming at it from an ops perspective, they go into the details of operationalizing a production-grade deployment of OpenStack on Kubernetes in a data center using community solutions.

They cover:

  • How to build a continuous integration and deployment pipeline for creating container images using Kolla
  • How to harden OpenStack service configuration with OpenStack-Helm to meet enterprise security, logging and monitoring requirements


Introduction to Workday Integration

Integrations are crucial when connecting external systems with Workday. Mastering them means understanding how data can be imported to or exported from Workday. Thankfully, there are predefined rules for how that data must look, and public interfaces you can access anytime you need to build an integration.

Workday provides single-architecture, cloud-based enterprise application and management suites that combine finance, HR, and analytics into a single system. Workday Integration is designed to balance high-security standards, agile updates, powerful insights, and an intuitive UI across devices.

What are the main types of Workday Integrations and how do you select which solution to use?

  • The main types of integrations are Workday Studio Integration, Enterprise Interface Builder (EIB) Integration and Cloud Connect Integration.
  • When deciding which tool to use, we need to take several factors into account. When designing an integration from the requirements, you should follow a roadmap; here is an example:
  1. Is the solution already pre-built? Am I connecting to a third-party vendor with a solution already in place? – If yes, most likely you will choose a Core Connector.
  2. Does this integration just need to export or import some data into Workday? – If yes, then most likely you need to go with EIBs.
  3. Do I need to execute several rules and reports to get the data and calculate the results I need? For example: determining payroll for Exempt versus Non-Exempt employees, calculating deductions, etc. – If yes, then most likely you need a Workday Studio Integration.

This technical document examines Workday’s Core Connectors and Document Transformation technology, which provide pre-built templates that allow developers to implement integration systems. Core Connectors address the majority of the effort to integrate with third-party endpoints. They can be implemented as delivered, or they can provide the foundation upon which to extend this functionality by leveraging Workday’s Integration Cloud Platform. Core Connector usage provides a rapid, flexible and re-usable method for integrating with Workday, ensuring that external systems receive only the data that you want to expose.

Document Transformation templates incorporate XSLT code, giving the developer the capability to transform both the data structure and the content of the XML document to meet client requirements. As part of Document Transformation, this document also covers Workday-specific processing instructions known as Element Transformation and Validation (ETV) and XML to Text (XTT).

Workday Connectors are currently available for:

  • Benefits 
  • HCM 
  • Workday Payroll 
  • Third-Party Payroll 
  • Financials 
  • Spend Management 
Workday Connectors

  • Outbound Integration: another system is the target for the data
  • Inbound Integration: Workday is the target receiving the data

Steps to Building a Connector Integration

Integration systems are tenanted definitions of a Workday integration. An integration system has the following building blocks:

  • Integration Template: Establishes the framework for data exchange through a collection of integration services.
  • Integration Service: Contains a set of attributes and maps related to a specific integration function. Integration services use XSLT (eXtensible Stylesheet Language Transformations) to convert Workday XML into a format that an external system can read.
  • Integration Attribute: Provides one or more tenanted values for a data element in Workday.
  • Integration Map: Defines relationships between Workday values and external system values. Examples are maps for benefit coverage levels, marital status, gender, job classifications, and locations.
  • Transaction Log: Provides a record of business processes and events in Workday. Integrations can subscribe to specific transaction log events to capture the changes to employee data that are relevant to an external system.

Workday Integration Cloud

Workday’s Integration Cloud Platform is a complete Integration Platform-as-a-Service (iPaaS) for building, deploying, and managing integrations to and from Workday. It provides a proven, enterprise-class platform that consists of an Enterprise Service Bus (ESB) embedded as part of the Workday platform with associated tools directly within the Workday UI for managing and monitoring integrations. The Workday Integration Cloud also provides pre-built and delivered connections to non-Workday systems, as well as tools for developing custom integrations. All integrations are deployed to and run on Workday without the need for any on-premise middleware.

Workday integration cloud

Advantages of using Workday for Integration

Clearly, you can build any integration you need to the Workday API using your own middleware technology; MuleSoft, Boomi, TIBCO, or Oracle Fusion Middleware are just a few of the middleware tools used by Workday customers. However, there are several major advantages to using the Workday Integration Cloud:

-Integrations surface naturally inside the Workday user interface. You can view the integrations, launch them, schedule them, secure them, include them in Workday business processes, configure notifications around them, and audit and log them, all from within the Workday user experience.

-Both packaged and custom integrations run on Workday software and hardware infrastructure in our data centers. You do not need to license or use any on-premise integration middleware platform, which can greatly simplify the deployment and management of integrations, especially when the majority of the integrations are connecting to Workday.

-Finally, Workday’s integration tools are also highly optimized for efficiently building integrations to and from Workday. Purpose-built packaged components handle much of the plumbing aspects of integration-building, freeing you to focus on the critical business logic.

Overall, Workday’s packaged integrations and tools are widely proven in a variety of demanding situations and offer a lower-cost, lower-risk path to delivering needed integrations in support of your deployment.


Load SQL Server data to Workday using SSIS / SOAP API

Step-By-Step: Import SQL Server Data to Workday Using SSIS

Let's build our SSIS package to load data from SQL Server or any source (e.g. Oracle, DB2) to Workday using SOAP API calls. Using the approach below you can create new records or update existing records in Workday. You can also delete records using the same concept.

Template Transform - Create Workday SOAP Request - Create new records using SSIS

The basic steps are outlined below.

  1. Fetch records from the SQL source and build an XML request for each row (e.g. Create Account).
  2. Build the XML request (SOAP body) using the Template Transform or the SSIS XML Generator Transform. If you have array nodes (e.g. one-to-many) then you have to use the SSIS XML Generator Transform; otherwise use the Template Transform for ease of use.
  3. Pass each input record (i.e. the SOAP body) to the Web API Destination to call the Workday API (CREATE, UPDATE, DELETE requests).
  4. Parse the XML response (i.e. the output) using the SSIS XML Parser Transform, or save the raw XML to a SQL Server database.
  5. Redirect bad rows or failed requests to a log file for review (a scripted equivalent of this flow is sketched below).
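
Before building the package, it can help to prototype the same per-row flow in a few lines of Python, as referenced in step 5 above: read rows, substitute XML-escaped values into a SOAP body template, post each request, and log failures. Everything here (DSN, table, columns, template file) is a placeholder, the WS-Security header is assumed to live inside the template as in the earlier example, and inside SSIS the ZappySys components do this work for you.

from xml.sax.saxutils import escape   # XML-encode values, like the FUN_XMLENC placeholder option
import pyodbc
import requests

API_URL = "https://MY-INSTANCE.workday.com/ccx/service/MY-TenantID/Human_Resources"
# Hypothetical SOAP body file containing {employee_id} and {user_name} slots,
# with the WS-Security header already embedded.
TEMPLATE = open("workday_request_template.xml").read()

conn = pyodbc.connect("DSN=SourceDB;UID=etl;PWD=secret")
failed = open("failed_rows.log", "w")

for employee_id, user_name in conn.cursor().execute(
        "SELECT employee_id, user_name FROM dbo.Accounts"):
    body = TEMPLATE.format(employee_id=escape(str(employee_id)),
                           user_name=escape(str(user_name)))
    resp = requests.post(API_URL, data=body.encode("utf-8"),
                         headers={"Content-Type": "text/xml; charset=utf-8"})
    if resp.status_code != 200:                    # redirect bad rows to a log
        failed.write(f"{employee_id}\t{resp.status_code}\t{resp.text[:200]}\n")

failed.close()
conn.close()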

Obtain Workday API URL

Once you have the WSDL file, the next step is to craft the correct URL for the API service you would like to call.

Syntax: https://<workday host name>.workday.com/ccx/service/<tenant name>/<service-name>
Example:
 https://MY-INSTANCE.workday.com/ccx/service/MY-TenantID/Human_Resources

Craft SOAP Body (XML API Request) using SoapUI

Now it's time to craft the SOAP request. Check the steps outlined here (use the SoapUI tool). Once you have the request body XML, you can change the parameters as per your need.

Creating SSIS Connection for Workday SOAP API call using WSS Security

To create a new connection for Workday, perform the following steps.

  1. There are two ways you can create an HTTP connection for the Workday service:
    First approach: Right-click in the Connection Managers panel, click “New Connection…”, select the ZS-HTTP connection type from the list and click OK.
    — OR —
    Second approach: If you are already on SSIS XML Source or SSIS REST API TASK or SSIS Web API Destination UI then click [New] next to the Connection Dropdown.
  2. Once the HTTP Connection UI is visible, configure it the following way.
    1. Enter the API URL for Workday (make sure you don't enter the WSDL URL). Your API URL will be something like below.
      Syntax:  https://<workday host name>.workday.com/ccx/service/<tenant name>/<service-name>
      Example:
        https://wd1-impl-services1.workday.com/ccx/service/MyTenantID/Human_Resources
    2. Select the credential type SOAP WSS (this setting is only found in v2.6.4 or higher).
    3. Enter your Workday user ID and password.
    4. For the WSS password type setting, leave the default (Not set) or change it to PasswordHash for more secure communication.
    5. Click OK to save.

Loading SQL Server data to Workday using SSIS

Let's look at a real-world scenario. You have an Accounts table stored in SQL Server and you would like to create the same accounts in Workday by making the appropriate API calls.

  1. Drag Data flow task from SSIS Toolbox. Double click to edit.
  2. Drag an OLEDB Source and configure it to read the SQL table (e.g. Accounts).
  3. Drag the ZS Template Transform from the toolbox and connect the OLEDB Source to the Template Transform. If you need flexible XML generation then use the XML Generator Transform, but it may require some learning curve, so for simplicity we are skipping that in this article.
  4. Enter the SOAP request you would like to call in the template text (like below); this is obtained from the previous section using a tool like SoapUI.
    For example, to create a new account you can enter a request like the one below. Replace xxxxxxxxxx with column placeholders.
    To insert a column name as a placeholder, click <<Insert Placeholder>> and then select the [Columns] node. The Template Transform outputs a column named TemplateOutput; you can use this as the body to feed the next step (i.e. calling the Workday API using the Web API Destination). When you insert a placeholder, make sure you use XML-encoded columns if you are expecting long text or special characters as part of your data.
    The syntax for an encoded value is <%CustomerName,FUN_XMLENC%>. You don't need FUN_XMLENC for numeric fields. For a normal placeholder without encoding, use just the name with the column placeholder indicators, e.g. <%Amount%>.
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:bsvc="urn:com.workday/bsvc">
   <soapenv:Header/>
   <soapenv:Body>
      <bsvc:Workday_Account_for_Worker_Add>
         <bsvc:Worker_Reference>
            <!--You have a CHOICE of the next 2 items at this level-->
            <bsvc:Employee_Reference>
               <bsvc:Integration_ID_Reference>
                  <bsvc:ID>xxxxxxxxxx</bsvc:ID>
               </bsvc:Integration_ID_Reference>
            </bsvc:Employee_Reference>
            <bsvc:Contingent_Worker_Reference>
               <bsvc:Integration_ID_Reference>
                  <bsvc:ID>xxxxxxxxxxxx</bsvc:ID>
               </bsvc:Integration_ID_Reference>
            </bsvc:Contingent_Worker_Reference>
         </bsvc:Worker_Reference>
         <bsvc:Workday_Account_for_Worker_Data>
            <!--type: string-->
            <bsvc:User_Name>xxxxxxxxxxxxxx</bsvc:User_Name>
         </bsvc:Workday_Account_for_Worker_Data>
      </bsvc:Workday_Account_for_Worker_Add>
</soapenv:Body>
</soapenv:Envelope>


Automated Testing for Workday

Quality Assurance at the Speed of Business

Workday offers enterprise-level software solutions for human resource and financial management. With Worksoft’s proven test automation software, Workday users can be confident that all end-to-end business processes, including those that integrate with other apps, run as designed.

With automated functional testing, companies can be sure their HR processes operate without disruption, even when it comes to frequent changes in intricate approval chains, tax calculations, employee benefits, withholding classes, payroll and more.

SIMPLE PROCESS DISCOVERY AND DOCUMENTATION. Workday helps enable business processes optimized for your HR team’s success. But it can be a challenge to gain real-time visibility into those processes, especially when they span multiple apps.

There’s a reason Worksoft automates not only the testing, but the discovery and documentation of these critical processes across the entire enterprise landscape. With Worksoft you can quickly discover, document, and analyze your company’s actual business processes — without costly interviews or coding.

MAKE DECISIONS WITH CONFIDENCE. Worksoft validates both Workday’s standard functionality and your organization’s own unique workflows. Worksoft also helps ensure quality across global business processes, allowing users to easily modify and execute automated tests across different geographies.

This allows your organization to support varying HR practices, multi-country operations, and regional laws and regulations. Make faster, strategic decisions with Workday, and know every process will work, every time.

MANAGE CHANGE IN A COMPLEX LANDSCAPE. Workday is likely just one system in your application landscape – most large companies have hundreds! Worksoft automation enables complete coverage of business processes in any environment – whether cloud-based, mobile, web, or custom applications.

And with cloud-based applications like Workday, it is more important than ever to test every step – because changes in the cloud can be frequent and complex. Fortunately, you can manage your company’s exposure to technology risk with Worksoft’s high-velocity business process testing and discovery.

