For the fourth time, agilists from across Cox Automotive convened to share experiences and network with colleagues. The 2016 edition met in Kansas City, MO on August 30-31; there were 120 of us representing 10+ business units. Cox Automotive’s Director of IT, Rene Rosendahl, organized and hosted the meeting. He and his team of helpers did a great job keeping the trains running on time! By a show of hands, more than 50% of the participants were attending their first Cox Auto Agile Open. Read more
Neumont University recently invited Dealertrack DMS to speak at one of their tech talks during their career fair week. Neumont is a local university in Salt Lake City, Utah that specializes in technical degrees and enables students to graduate with an accredited bachelor’s degree within three years. On the day of the event, two of our managers, Alan Harwood and Mark Quigley, met with and interviewed several of the students graduating that semester for open positions within Dealertrack DMS. After a break for lunch, Neumont alumnus Dru Hurdle gave an overview of microservices to a group of 100+ students based on the work he contributed to the Dealer Management System (DMS) acceleration project. Afterward, the students had an opportunity to ask questions of the attending Dealertrack DMS group. Below are a few of the questions and responses from that session. Read more
On April 18-19, the Dealertrack and Dealer.com engineering teams participated in a nationwide engineering event. This year, each location came up with its own unique project to enhance its skills and experiment with new ideas. Participating locations included Burlington, VT; Dallas, TX; Manhattan Beach, CA; Groton, CT; Sacramento, CA; Lake Success, NY; and Salt Lake City, UT. Read more
Abstract: Within the engineering environment at Dealertrack, testers have two main pathways they can pursue: non-technical or technical. The information presented in this post is written with our context in mind, but other companies in the larger Cox Automotive family may also share similar frameworks in structuring their testing community.
Back in December of 2014, my team and I were given the challenge of growing capacity within our Professional Services (PS) team: double the size of our release train from 4 to 8 teams by the end of 2015. While we had grown our teams in Burlington, adding two new teams in Dallas, where we had no presence, seemed like a big challenge. At that time, a couple of other release trains had Dallas-based teams learning the Dealer.com platform or were hiring new teams. While visiting Dallas, I asked several team members what was and wasn’t working for those teams. One thing I heard was that sending someone down for a week or two of training wasn’t enough; there needed to be some level of dedicated support for the teams. This heavily influenced how we resourced and approached onboarding. Read more
The journey to unconscious competence, and how we as testers move toward it, has been on my mind lately. You may already have some exposure to unconscious competence and the four stages of learning. If not, don’t fear; we will cover it here. So, let’s jump in and all get on the same page.
In August of 2013, Dealer.com began its all-in transformation to agile from partial waterfall, partial ScrumBut. In Part 1 of the story, we discussed the initial presentations made to project management groups more inclined toward traditional waterfall methodologies. In Part 2, we look at our interactions with the agile/Scrum community in Southern California.
I want to pose a question: Is a generalized simplified model for priming novice testers helpful or confusing?
I struggle with this question when creating training material and teaching testers. It’s a natural inclination of mine to want to distill what I have learned into some concentrated and easily digestible brew: something that testers can take, use, and then become better testers. The catch, though, is that I know that’s not possible with testing. Anything I create is, at best, going to be an imperfect model. In some cases it may help, and in others it would surely fall flat. Another concern of mine is novice testers hunting for a silver bullet or a magical list to follow. Since testing is about critical thinking, not lists, I want to avoid anything I create being used that way.