Secure Automation Testing at Scale Leveraging SBOX - Test Guild Podcast

In this Test Guild Podcast episode, Michael Palotas and Lee Walsh discuss secure automation testing at scale leveraging SBOX

Session Transcript

JOE COLANTONIO:
Hey, it's Joe, and welcome to another episode of the Test Guild Automation Podcast. And today we'll be talking with Michael Palotas and Lee Walsh, all about Secure Automation Testing at Scale, leveraging something called SBOX. If you do anything with enterprise testing, you want to listen to this episode. If you don't know, Michael is currently the head of product at Element34, which is the market leader for enterprise testing grid infrastructure solutions that run inside the corporate firewall, which I know is a big, big deal for enterprise companies. He was the head of Test Engineering at eBay, where he was instrumental in designing, developing, and open-sourcing the Selenium Grid. He really has a deep understanding of how this all works. He's also worked on Selendroid and iOS drivers. Michael also worked as a Software Engineer at Intel and Nortel, so he really knows his stuff when it comes to enterprise application testing. Also joining him we have Lee Walsh. Lee is currently the Director of Customer Success at Element34. So he speaks to a lot of people. So excited to get his insight on this. Lee and his team primarily focus on ensuring that Element34 customers are getting the most out of SBOX, which is Element34's flagship product. We can learn all about SBOX and why you need to know about it, especially if you're working in an enterprise. Prior to Element34, Lee was the team lead at BrowserStack, so he has some understanding of the space as well. He has a lot of knowledge and a lot of great information about automation testing and how to scale at the enterprise.
Hey, guys. Welcome to the Guild.

MICHAEL PALOTAS:
Hi, thanks for having us back.

LEE WALSH:
Hey, Joe. Thanks for having me.

JOE:
Good to have you both here, Michael and Lee.

So, Lee, this is your first time on the show, but Michael, I know we talked back in 2020, so welcome back. For people who may have missed that episode, which, I can't believe, was almost three years ago: at a high level, what is Element34, or what is SBOX? Maybe we can just ease into it that way and give people a little flavor of what's in store.

MICHAEL:
SBOX is basically a behind-the-firewall on-prem test automation infrastructure solution that runs completely secure inside your firewall. No data going out and no external access is required from the outside. That's really in a nutshell what it is.

JOE:
Awesome. And once again, I want to remind listeners that we spoke with you in an earlier episode as well. So you created one of the first implementations of Selenium Grid. I'm curious to know how that influenced the solutions that you're working on now?

MICHAEL:
Yes, what you said is both correct and incorrect. Let me provide some background to clarify. Back in the day, I oversaw quality engineering at eBay International. Given the scale and pace at which we operated, automation emerged as a crucial factor in delivering top-quality software to our customers. Keep in mind, this was in the early 2000s. We began our search for a suitable tool and settled on Selenium. Our initial approach to automation was like most—adding one test after another and running them sequentially. However, we soon realized that this method wouldn't scale. We needed a way to run tests in parallel and across different browsers. The challenge was that there wasn't a tool available, even within the Selenium ecosystem, that met our needs. So, we decided to develop our own solution.

During this time, we were in contact with key figures from the Selenium project, like Simon Stewart. Once our internal solution was ready, we discussed the possibility of integrating it into the Selenium project. This collaboration led to the birth of Selenium Grid. Now, to address the point of contention: while I played a pivotal role in the process, I didn't personally write the code for Selenium Grid. That credit goes to François Reynaud, our current VP of Engineering, along with Christian Rosenboldt and Kevin Menard. My primary responsibility was overseeing the open-sourcing aspect, ensuring the integration of Selenium Grid into the Selenium project, and facilitating its open-sourcing.

JOE:
Michael, when did you first work on Selenium Grid then? It's been quite a while, what, 10 or 12 years at least?

MICHAEL:
Yes, around that. I think we introduced it at the very first Selenium conference in San Francisco. I believe that was in 2011 or 2012. It’s been a while, and of course, a lot has happened in the Selenium space and with Selenium Grid. It's been around for a while, and we're happy that we made our contribution there to help change the world.

JOE:
Awesome. Lee, I want to get you in there quickly. Just a random thought: you came from another infrastructure company, one that did something similar to what you're all doing now at Element34. What got you interested in this space, or what are your thoughts on Element34 and why it might be different from what you've worked on in the past?

LEE:
Yes, so previously, I would have worked with a traditional SaaS solution in this industry. So, your testing infrastructure is great. You get access to it. You run tests either manually or in an automated fashion. But where that infrastructure sits is a key component. So that would sit in the vendor's public cloud. Over time, I saw a use case of some potential customers not being able to use this because they wanted to use real data or they needed to test their early-stage environment. So that got me thinking. I did a little bit of searching, came across Element34, and then was obviously put in touch with Michael and Francois. And then, yeah, straightaway I understood the key benefits of the product and the offering that they have, and it was something that I wanted to be part of as well.

JOE:
Do you speak to a lot of customers? From your experience, how many people are still using an in-house Selenium Grid? Is it a lot of people?

LEE:
Yeah, you'll find that there are people who still use an in-house solution as well. So, they'll have a combination there, whether that's something like SBOX or something they build themselves. They'll have their own grid at certain points for certain use cases, but they're also having that build vs. buy conversation, and they may be sitting in the buy camp as well. So it's a mixed bag, but you'll see people making use of their own grid and making use of a vendor, whether that be a public cloud offering or something like SBOX.

JOE:
Michael, what do you think some of the benefits are then from an in-house grid vs. an enterprise solution, which I think is what kind of separates you? I've spoken to a lot of people over the years, and it seems like your sweet spot really is the enterprise and it's something that is critical that a lot of people like us may not be aware of.

MICHAEL:
Yeah, one way to look at this is: Why do people create their own Selenium Grid in-house in the first place? Let’s start from that perspective. I would typically find that it’s because it's super easy to get started with. Basically, you download the software, you spin it up, and there you go. You've got something to show; you've got something to run tests against. It becomes a bit more problematic when we look at things like maintenance. That's a whole different story. There's a lot happening all the time in the browser ecosystem, and in the Selenium space as well. New releases come out frequently. It becomes very cumbersome and very time-consuming to keep a Selenium Grid up to date once you have it. Effectively that's what you want. You want to make sure you're running your tests, of course, against all the browser versions, but most importantly, you want to test against the new stuff. So that's where people usually start. Now where SBOX comes in, or how we solve some of those issues: basically, we are the only enterprise solution that was built from the ground up to be inside the customer's firewall.

Typically, if a company decides to build their own grid, they are probably looking at having it inside their own firewall. So, they may just go off and build it on their own. But of course, they end up with all those maintenance issues. That's exactly what we solve because we give you all the bells and whistles and the convenience of the SaaS products because those products are great, right? With the difference that it's running completely inside your own firewall. So there's no data going to the outside. You don't need any external access from outside to get back in. So that's the key differentiator.

JOE:
Nice. So what are some key benefits, then, for which people at the enterprise may need the solution? They already have something working. I mean sure, we must maintain it, but there are other pieces that they may not think of, like compliance issues or things like that, that would make them say, okay, we need this other solution to help us.

LEE:
I can probably address that. So, there are some advantages, and some are more obvious than others. But when we speak with potential customers, we normally mention five key considerations. Those key considerations relate to our offering, but the conversation would be quite similar when we're looking at building vs. buying a solution. And if buying a solution, where should that be hosted?

Michael touched on it already, but the first two considerations are the security and compliance pieces. If testing within your own corporate network, you're staying secure because you don't need to open that network the way you would to a traditional SaaS solution. Remember, if everything involved in testing is hosted internally, why would our testing infrastructure be any different? You may have the answer to that question, but it's a question that needs to be answered nonetheless. If we want to test as early and as often as possible, we also need to know what's involved in opening that environment. Some will create tunnels and so on, but that's something we need to be aware of, and given where SBOX is installed, this is not a concern compared to others.

The second piece that I mentioned is the compliance piece, and this is straightforward. If no data is leaving my network, then there's little risk of breaching any data privacy regulations. Also, if we're using real data, we may have agreements with some of our customers that that real data can't be used outside of our network. Those are the two main ones that we see come up.

Performance is another key consideration: where the infrastructure is hosted will impact performance. Logically, if I'm testing within my own network versus reaching out to a solution that may be hosted in a different country or a different continent, we're going to see different results. I won't say exactly what that difference will be, but less latency normally equals improved performance and a reduction in tests potentially failing due to timeouts or other scenarios caused by performance issues.

The last two, which are sometimes overlooked, are scalability and cost efficiency. Scalability here is from an infrastructure standpoint, not a test suite standpoint, and I'll answer it from that angle: what's involved for me to run my tests at scale? How many tests can I run in parallel or concurrently? What kind of queuing system do we get access to? These are all important for me as I grow on my automation journey, but potentially also for my organization as more people make use of the solution I've chosen. Then the cost efficiency part is straightforward: the cost of scaling or continuing this journey. There's a sort of hidden cost inside this infrastructure, but we won't talk about that today.

JOE:
That’s a good point. A lot of times people say, I have an open-source solution, so it costs me nothing, when in fact it costs them quite a lot. Is it really going to maintain itself? Can you explain a little more about what cost efficiencies there may be with SBOX? And if your tests run faster and faster, I assume you’ll get quicker results and less time spent, which is also going to save money.

LEE:
Yeah, so for cost efficiency, let's take the example of going from 10 tests to 1,000 tests. How much is my infrastructure going to cost me to host that solution internally? Also, do I need a bigger team to maintain this? So there are the maintenance and management costs, and the work that's required to troubleshoot any issues that we see in our own grid. These are all part of a cost that you may not initially consider as you start off or as you grow. But further down the road, it starts to become more and more of a pain or a headache, and something that starts to get out of control pretty quickly.

JOE:
Absolutely. I used to work for an enterprise company doing radiology equipment, and it was hard to even get people to, like I said, open up the tunnels and the ports for an outside solution, and there were compliance issues with the FDA. So we didn't have an in-house solution, but this sounds like a perfect thing for that particular environment. In that case, we had thousands of tests. What would we have needed to do to get our tests to work with the SBOX solution?

LEE:
From a test script perspective, there are very few changes that need to be made. So if we take someone who has Selenium test scripts already written, in whatever language or framework they've chosen, Selenium is Selenium, whether I run it locally, against a traditional SaaS solution, or on something like SBOX. The important piece is the driver initialization, that is, where we are pointing our test scripts to. Once I change that from, let's say, a local Chrome driver to the SBOX hub address, I can start to execute my tests. Other things that you might have in play already are the likes of test authoring tools, so you might be using something to build your test scripts. In the end, tools like that normally create or leave you with a Selenium test script, and within the UI you can then point to a hub address as well. So very little change is required to move across or make use of something like SBOX. I think that's why this market is so competitive as well. How easy it is for me to go from one solution to another means that everyone in this industry needs to make sure they're on top of their game, really.
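Lee's point that only the driver initialization changes can be sketched in a few lines of Python. This is a minimal illustration, not Element34 documentation: the hostname `sbox.example.internal` is a placeholder, and `make_driver` is a hypothetical helper assuming Selenium 4.

```python
def hub_endpoint(host, scheme="https"):
    """Build the WebDriver hub URL for an in-network grid host."""
    return f"{scheme}://{host}/wd/hub"


def make_driver(hub_url=None):
    """Return a Selenium driver.

    With hub_url=None this is the usual local Chrome driver; passing a
    grid URL is the single change needed to run the same test remotely.
    Requires the 'selenium' package (imported lazily so hub_endpoint
    works standalone).
    """
    from selenium import webdriver

    options = webdriver.ChromeOptions()
    if hub_url is None:
        return webdriver.Chrome(options=options)
    return webdriver.Remote(command_executor=hub_url, options=options)


# The rest of the test stays identical regardless of where the driver points:
# driver = make_driver(hub_endpoint("sbox.example.internal"))
# driver.get("https://app.example.internal")
# driver.quit()
```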

JOE:
Absolutely. So I alluded to some benefits and some issues of running at the enterprise. One of them was security risk: they'd say, you can't do that for security reasons. I know security is a huge requirement for a lot of enterprises. Do you have any insights on being on-prem? How does it help with some of these security concerns you probably hear all the time from the enterprises you work with?

LEE: 
Yeah. So with SBOX, by being inside the customer's network already, we naturally have much deeper integration points with the rest of the customer's development infrastructure, be it the ability to hook into their identity provider, something like Active Directory or OpenID, or much more complex integrations like Kerberos and NTLM, which may not be possible if you're using a SaaS solution that sits outside of your network. The other talking point here is that with SaaS solutions, to let them get into your network from the outside, you potentially open your firewall, whitelist IPs that they provide to you to let them back into your infrastructure, or create those tunnels that we spoke about. With SBOX, you don't have any of that headache. And given where we're located and installed, we as the vendor have no idea how many tests you're running or anything like that. It's completely airtight.

JOE:
Absolutely. Michael, any insights around security, from your point of view?

MICHAEL:
Yeah, I usually see security and compliance go hand in hand in enterprises. The compliance piece is more about what we touched on, the data: depending on what kind of data you're using, you may or may not be allowed to send it elsewhere. Then, of course, the security part is what Lee just mentioned: to use solutions that are sitting outside of your firewall, you have to drill a hole into your firewall. You must let them back in. That's typically done through these tunneling mechanisms, which are essentially a VPN. And we all know that once you're on the VPN, you can get anywhere else you want as well. There are certain risks involved when you do that. If you're a small startup, maybe that's not so much of a concern, but if you're an enterprise, or if you're a government organization, it's absolutely key that those things are taken care of.

JOE:
So just a random thought, Does this only work with Selenium or can you also scale like a Playwright test or any other software? Or is this a strictly Selenium-based grid solution?

MICHAEL:
No, it's not. I think the last time we spoke it was, and back then (in 2020) the product was also called Selenium Box. A lot has happened over the last three years from a product perspective, and from a company perspective as well. Probably the biggest piece that we’ve added, in terms of the product, was the ability to run Appium, for the whole mobile side. Then we also added Playwright about a year ago, because we saw there was traction in the market for Playwright and some customers were starting to ask about it. So we listened to our customers and we implemented that. We actually have quite a few customers that are using both Playwright and Selenium, and it seems to work quite well. It's funny, because when you look at some of these comparisons of the two, it oftentimes sounds like it's an either-or kind of decision. From what we're seeing, it's more of an "and" kind of thing. Maybe for some teams, Selenium is better. For other teams, Playwright may be the better solution. So what we're trying to do is provide one place where all the tests can be run.

JOE:
Interesting. So if someone has a mixed test suite of Selenium and Playwright, do they run the same on SBOX, or do they need to do anything different to get them to work? Or is it consistent: you just point the driver to your environment and you're off and running?

LEE: 
Yes, it's a seamless process for all of the integrations. It's about where the initialization is happening. So if I have test scripts built out already, or if I were to look at a test script right now, large parts of it would not need to change. What you're actually trying to do stays the same; the initialization piece is key. For Appium as well, we would just point to the hub address and, obviously, specify an app that we want to test. But the script itself would run wherever it needs to run, really, just by pointing it to SBOX in this case.
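The same pattern holds for the Appium case Lee describes: the capability set names the app under test, and only the server address decides where the session runs. A hedged sketch; `appium_caps` is a hypothetical helper, and the keys follow the W3C `appium:`-prefixed convention used by Appium 2.

```python
def appium_caps(app_path, platform="Android"):
    """Build a minimal W3C capability set for an Appium session."""
    automation = "UiAutomator2" if platform == "Android" else "XCUITest"
    return {
        "platformName": platform,
        "appium:app": app_path,              # the app binary under test
        "appium:automationName": automation,
    }


caps = appium_caps("/builds/app-debug.apk")
# With a client such as Appium-Python-Client, the only grid-specific part
# would be the server address the session is opened against, e.g. a
# placeholder like "https://sbox.example.internal/wd/hub".
```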

JOE:
Nice. A lot of these other solutions are able to run on different devices because they're in the cloud, as SaaS solutions. So with something installed locally, is the solution not focused so much on the devices as well, or how does that work?

LEE: 
Around the maintenance piece, I suppose, Michael kind of touched on this area already, but one of the benefits of our product is that, yes, it is installed within your corporate network, but you do get the benefits that you would see with a traditional SaaS solution from a maintenance standpoint. Once SBOX is installed, the maintenance part is automated, which will then give you access to the latest and greatest, or whatever your customer base is using to interact with your website or native application. The first installation is done very quickly, and from that point onwards you can run tests in parallel and automatically get new browsers, mobile devices, and so on.

JOE:
Awesome. The problem I know a lot of people have is, yeah, I could scale my tests here, but my tests are such a mess that they're not going to be able to scale. Do you have any advice on how to help people get to the point where they can really benefit from something like SBOX?

LEE: 
Yeah. So I touched on scalability earlier from an infrastructure standpoint, but as the test landscape starts to evolve, you'll go from running tests sequentially to, once it scales, running more tests simultaneously, concurrently, in parallel. Parallelizing tests can be hard. What typically happens is that we take the tests that we have built, which were not designed to run in parallel, and we start to fire them all off. The best-case scenario is that they all start to fail. The worst case is when some of them start to fail and some of them start to pass. We start to see flaky results and we need to start troubleshooting more. So, focusing initially, when we're building out our tests, on making them more atomic is critical, because that ensures that we're ready to run them in parallel, or we're preparing for that scale further down the road. I think it's something that you need to be aware of at the start: making sure they're more atomic.

MICHAEL:
This is something that even back in the eBay days, we got wrong. We didn't think about it when we started with automation. We just wrote tests one after another, not really thinking about what happens when you say, I have a thousand tests now, here we go, let's just run them all at the same time. A lot of it comes down to which data you are using for testing. If you're sharing data between tests, that is typically catastrophic; your tests will have those flaky behaviors. Sometimes it may work, sometimes it doesn't. That's absolutely not what we want.

JOE:
That's a great piece of advice. I always recommend that when people are starting out, they run right away in CI/CD and start running in parallel, just to find these issues for sure. A lot has changed; as I said, it's been three years since we last spoke. You mentioned some new features since the last time you've been on, like the ability to run Playwright, which seems like a trend I've been seeing as well, with a lot of people using Playwright. Michael, is there anything else that's built into SBOX that maybe wasn't there three years ago that we haven't covered?

MICHAEL:
That's a good question. Our engineering team has worked very hard over the last three years. There are two parts to that: one is to keep up with what's happening in the browser and ecosystem space, and the other is to add more features. So we've done a lot of work in, for example, adding OpenID Connect as an identity provider system. Also, adding Kerberos and NTLM support to be able to mimic the user who you're running as and tap into your enterprise identity system. This also goes along with what Lee mentioned: because we are sitting inside your network, we have a lot deeper touch points than what you can have when you're coming from the outside. So that allows us to integrate much deeper and tighter with the rest of your enterprise development and test infrastructure.

JOE:
Do you also work with SaaS-based applications? Has someone had to scale a bunch of tests against Oracle or SAP? Do you care what application they're testing?

MICHAEL:
No, not at all. That's completely up to the customer to decide what they want to test and where that application sits. Maybe one thing to clarify is that when we say on-prem, or behind a firewall, it can absolutely be in your cloud as well. Most of our customers are running SBOX in their cloud, in AWS, Azure, Google Cloud, or similar infrastructures, right? So we don't really care where the rest of your development pipeline is. But that said, typically the customers that come to us want to have everything in one place which is behind their firewall.

LEE: 
And to add to that, there are certain scenarios that make the considerations we spoke about earlier more relevant, like potentially testing something that's already available in production, using it as a monitoring tool or something like that. So some of those key considerations become a lot more relevant depending on what you're testing as well.

JOE:
I'll be honest, one trend I've been seeing is that if I go to other testing company websites (not Element34) that have been around for 12 years, or so, I see a lot of AI machine learning popping up in their content. Any thoughts about machine learning, AI, or how it applies to maybe a grid type of infrastructure?

MICHAEL:
Yeah, we're definitely watching that space as well. I think in general it often feels very confusing with all the solutions that are out there. It starts not even with AI and machine learning, but just with test automation. It can be a bit of a loaded and bloated term that's used to describe a lot of things, and it's sometimes hard to understand what exactly I'm getting from this solution or that solution. AI is one of those shiny new things, and everybody is adding it to their pitch to make the product look attractive. What we're seeing with AI is that our customers are using, or starting to adopt, AI and machine learning. This is something that we didn't really think of, to be quite honest. What we're hearing from those customers that are using AI in their products is that in most cases they have to test with real, or at least realistic, data in order to train their models and make sure that their algorithms work. So they're saying, we have no other choice than to actually use real data to do that. With that, being behind a firewall actually becomes a must-have. That's something quite interesting that we're seeing, which drives even further adoption of what we're providing.

JOE:
Absolutely. I've worked at a medical company where we used patient data, so this seems like a great point. If they were creating an in-house model, they definitely would want an in-house grid solution to run the tests against. Great point. So, based on that and other things we talked about, what is the future of Element34 and SBOX? Anything on the roadmap you can reveal or tell us about?

MICHAEL:
Yeah, there are lots of things on the roadmap. I think our product vision is to ensure that in security, scalability, and performance we are always best in class. Whatever we do, it's always going to be centered around that. We're the experts in the space, we understand what problems our customers face, and we listen to our customers. So we're going to implement what our customers need. Obviously, technology keeps evolving. Playwright is one good example: three years ago it wasn't there, and today it's a major player. We monitor that space and we make sure that as things change, we bring them into the product and add support for them. But first and foremost, as I think software product companies should do, you should listen to your customers. What do they need? That's the most important thing. From that, we put together a roadmap. Lee is in very close collaboration with all of our customers. We're constantly listening to them, seeing what it is that they need. That really helps drive the roadmap as well. So all in all, we're super excited for the next phase of the company. We see amazing potential in what's happening in the market and what we can bring to it to solve these issues. So we're looking into a bright future.

JOE:
Nice. So I know I keep mentioning it, but I really think the enterprise is kind of overlooked, or a lot of people just pick up an open-source thing. If anyone has to deal with, or cares about, security, risk, compliance, scalability, performance, and cost efficiency, all the things we talked about, it very much applies to the enterprise. They definitely should be checking out SBOX. So before we go, what's the best way to find or learn more about SBOX or Element34, if someone listening to this says, OMG, I'm an enterprise and I need to try this?

MICHAEL:
The easiest way is to go to our website, Element34.com. There you can request a demo or request to speak to someone. We’re happy to show you a personalized demo and see if it fits your needs.

JOE:
Awesome. Before we go, Michael and Lee, I like to ask this one question: What is one piece of actionable advice you can give to someone to help them with their automation testing grid efforts? 

LEE: 
So, to keep things relevant to what we’ve already mentioned: flaky tests. There's a whole host of different reasons why you might see flaky tests. Try to understand why your tests are flaky. Is it due to performance, or to the data that you're using? If it is, then SBOX may potentially be something you need to look at. Review why your tests are flaky, understand why they're failing, and come to a solution at that point.

MICHAEL:
I like to give one piece of advice to anybody in the software space, not just around automation: don't just jump on the first bandwagon when you read about a new tool that was released. I always advise watching the space to see how a particular tool evolves, because we've all been there. You read about a cool new tool, you bring it into your company, you start really relying on it, and then at some point you find out that it's actually just one person looking after the whole tool, and that person moves on, the project dies, and then you have a real problem, right? So that's one thing that can prevent a lot of headaches and a lot of rework: choose your tools wisely and don't just jump on the next best thing that you may have read or heard about.

JOE:
Thanks again for your automation awesomeness. You can find links to everything of value we covered in this episode. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good.