End User Services

What’s Happening in EUS: A Walk Through the EUS QA Test Process

Submitted by Diane Gentile

The Quality Assurance (QA) team in End User Services recently completed testing an updated Task Sequence (2.0) for imaging. Task Sequence 2.0 will be used by the Depot and Deskside support techs.

The QA team tests most technologies developed by Desktop Engineering, and over the years we have become quite successful at rooting out problems with our “breaking” techniques. Attempting to break things helps us identify issues in new technologies before they are released to customers for general consumption, thereby minimizing customer headaches and countless calls to the Service Desk. Here is an outline of what our test process looks like, as applied to the Task Sequence technology.

Side Note: If you are not familiar with task sequences and imaging, we’ve included an explanation at the end of this article. Before we get started, we should note that the QA team is not involved in the development of the 2.0 Task Sequence; our role is simply to test it. Development of this new task sequence is the genius of the Desktop Engineering team, with whom we work very closely and who deserve all the kudos for its creation.

Without getting into the minutiae, the QA test process looks like this:

Step 1: Engineering submits a test request to QA via our “Dropbox” folder in Box.

Step 2: QA reviews the test request, and gathers any additional information needed to begin testing.

Step 3: QA creates a Test Case Document, which includes all the steps for testing – this is our “Master” test document.
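
To give a feel for what the master document captures, here is a tiny, purely hypothetical sketch of one test case entry in Python. The field names and values are invented for illustration; the real document is far more detailed:

    # Hypothetical shape of a single entry in the master test case document.
    # All field names and values below are invented for illustration.
    test_case = {
        "id": "TS2.0-001",
        "feature": "Input page: WUSTL Key ID field",
        "steps": [
            "Boot the machine into the Task Sequence input page",
            "Enter an invalid WUSTL Key ID",
            "Attempt to continue",
        ],
        "expected_result": "Continuation is blocked until a valid key is entered",
        "actual_result": None,   # filled in during the test run
        "status": "not run",     # pass / fail / not run
    }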

Step 4: Using the master test case document, we run one test all the way through to determine if there are any major hiccups, issues, or failures. If there are, the task sequence goes back to Engineering.

Step 5: Now we’re ready for the meat of the test process, which we will outline for the Task Sequence 2.0 test. It involves running the tests on a variety of hardware available in the QA lab and on new hardware from the Depot, and performing the tests using different variables we know will be used “in the real world” of imaging at WashU IT. This test is broken into two sections: Input and Outcome.

Input Testing: We test each feature and option on the Task Sequence’s input page. We do that by entering both valid and invalid information into the fields, where possible. We also test what happens when you do something unexpected during the process. Some examples include (a small code sketch follows this list):

  • Cancelling the process in the middle of entering data into the input screen – after cancelling and rebooting, does Windows still load? Is user data still intact?
  • Changing input fields after the computer name (auto-generated from the inputs) has been created – does the computer name update accurately as you change inputs?
  • Entering an invalid WUSTL Key ID – are you allowed to continue with an invalid WUSTL Key (hopefully not!)?
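
To make those input checks concrete, here is a minimal Python sketch of the kind of behavior we verify. The naming convention and validation rule below are hypothetical stand-ins, not the actual Task Sequence 2.0 logic:

    # Illustrative sketch only: the naming convention and WUSTL Key rule are
    # hypothetical stand-ins, not the actual Task Sequence 2.0 logic.

    def generate_computer_name(dept_code: str, asset_tag: str) -> str:
        # Recompute the machine name from the current inputs, so the name
        # always reflects the latest values the tech has entered.
        return f"{dept_code.upper()}-{asset_tag}"

    def is_valid_wustl_key(wustl_key: str) -> bool:
        # Placeholder check; a real validation would query a directory service.
        return wustl_key.isalnum() and 3 <= len(wustl_key) <= 20

    # Test: change an input after the name has been generated and confirm
    # the generated name updates to match.
    name_before = generate_computer_name("ABC", "12345")
    name_after = generate_computer_name("XYZ", "12345")  # tech edits the dept field
    assert name_before != name_after, "name did not update when inputs changed"

    # Test: an invalid WUSTL Key ID should block the tech from continuing.
    assert not is_valid_wustl_key("bad key!"), "invalid key was accepted"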

We are doing our best to break it, and we admit it’s what makes our jobs more fun! If any attempts to break it result in failures or unexpected consequences, it goes back to Engineering for some tweaks. 

After input testing is complete and an image is successfully installed, we test the outcome of the newly imaged computer.

Completion or Outcome Testing: Next, we test that the task sequence and image completed successfully. This includes numerous verifications:

  • Is the completion email generated and sent to the tech performing the imaging?
  • Does the newly imaged computer have the name generated during the input process?
  • Is the newly generated object in the correct Organization Unit (OU) in Active Directory?
  • Has all captured user data been restored as expected?
  • Is all expected software installed?
  • Additionally, there are a number of other “checks” our team performs whenever we test any new image or task sequence: think driver verifications, BitLocker status, group policies applied, system tray apps running, etc. (a simplified sketch of such checks follows this list).
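
As a simplified illustration of that outcome checklist, here is a short Python sketch. Every helper and result below is a hypothetical stub; the real verifications are performed against the newly imaged machine and recorded in the test document:

    # Illustrative sketch of an outcome checklist; the stubbed results stand in
    # for verifications performed against the newly imaged machine.
    import socket

    def name_matches(expected: str) -> bool:
        # Does the machine carry the name generated during the input process?
        return socket.gethostname().lower() == expected.lower()

    def outcome_checklist(expected_name: str) -> dict[str, bool]:
        return {
            "computer_name": name_matches(expected_name),
            # Stubs for the remaining verifications from the list above; each
            # would check mail, Active Directory, restored files, and so on.
            "completion_email_sent": True,
            "correct_ad_ou": True,
            "user_data_restored": True,
            "expected_software_installed": True,
        }

    results = outcome_checklist("ABC-12345")
    failed = [check for check, passed in results.items() if not passed]
    # Any failed check sends the task sequence back to Engineering.
    print("all checks passed" if not failed else f"failed: {failed}")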

We tested using at least 15 of the dozens of unique department identifiers (UDIs) available within the Task Sequence to ensure all department-specific software was installed. Tests were completed on both old and new Dell laptop and desktop computers, including models with the latest Tiger Lake processor. Surface machines were also tested. And every step of every test is documented.

If the testing team runs into any issues at any point during this testing phase, it goes back to Engineering for whatever tweaks are needed. Then we begin the test process all over again. 

Step 6: Once all test results are reviewed and the QA team is satisfied with the results, we provide a sign-off document to Engineering. The sign-off document outlines our approach, high-level test steps, and a summary of results. It is then up to the Desktop Engineering team to review the results and approve the technology for production.

Testing of Task Sequence 2.0 was completed in about seven weeks, but the new task sequence is not yet in production. We’ll leave the thunder of that announcement to the Desktop Engineering team.

That is the QA testing process in a nutshell. If you are interested in learning more, the QA lab in 4480 is open to visitors – appropriately masked, of course!

What is a Task Sequence, and what is it used for?

Simply put, engineers create task sequences to automate the steps needed to deploy an operating system image to a computer. This is done using Microsoft’s System Center Configuration Manager (SCCM) and is presented to the imaging tech in a nice, neat GUI. The image itself is composed of the operating system (typically Windows 10) and any added software such as MS Office, Adobe Reader, browsers, etc. As part of the automation, the task sequence can capture and restore user data, if needed. Unique department identifiers (UDIs) can be automated for each department; UDIs identify and install the additional or different software packages a specific department may require. There are dozens of UDIs included in our task sequence.
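
As a conceptual sketch of how a UDI selects department-specific software, consider the following Python fragment. The UDI codes and package names are invented for illustration; the real mapping lives inside the SCCM task sequence:

    # Conceptual sketch: the UDI codes and package names are invented examples.
    BASE_IMAGE = ["Windows 10", "MS Office", "Adobe Reader", "Browsers"]

    UDI_PACKAGES = {
        "FIN01": ["Budget Tool"],      # hypothetical finance department
        "LAB07": ["Lab Data Client"],  # hypothetical lab department
    }

    def software_for(udi: str) -> list[str]:
        # Everything in the base image, plus the department's extra packages.
        return BASE_IMAGE + UDI_PACKAGES.get(udi, [])

    print(software_for("FIN01"))
    # ['Windows 10', 'MS Office', 'Adobe Reader', 'Browsers', 'Budget Tool']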