Building Better, More Scalable Algorithms with SigOpt’s Scott Clark

An A.I. model is similar to a boat in that it needs constant maintenance to perform. The reality is that A.I. models need adjusted boundaries and guidelines to remain efficient. And when you live in a world where everyone is trying to get bigger, get faster, and gain a certain edge, Scott Clark is helping make that possible with his finely tuned A.I. modeling techniques.

“As you’re building up these rules and constructs for how that system will even learn itself, there’s a lot of parameters that you need to set and tune. There’s all these magical numbers that go into these systems. If you don’t have a system of record for this, if you’re just throwing things against the wall and seeing what sticks, and then only checking the best one, and you don’t have a system of what you tried, what the trade-offs were, which parameters were the most important, and how it traded off different metrics, it can seem like a very opaque process. At least that hyperparameter optimization and neural architecture search and kind of tuning part of the process can be a little bit more explainable, a little bit more repeatable, and a little bit more optimal.”

More explainable and more optimal, but most importantly scalable and reproducible. On this episode of IT Visionaries, Scott, the CEO and co-founder of SigOpt, a company on a mission to empower modeling systems to reach their fullest potential, explains the basic steps that go into successful models and how his team tweaks and optimizes those models to build more efficient processes. Plus, Scott touches on the future of algorithmic models, including how they will improve and where they struggle. Enjoy this episode.

Main Takeaways

  • Bad Data, Bad!: When you’re building algorithmic models, you have to focus not only on the data you are putting into those models but also on where that data is coming from and whether it is trustworthy. Untrustworthy data, whether it comes from an unknown source or is biased in some way, can lead to models that deliver poor results.
  • Delivering Consistency: While every algorithm needs to be tweaked and tuned at the start, the best way to deliver consistent, scalable algorithmic models is to define hyper-specific patterns the algorithm can abide by. When an algorithm knows what rules it is looking for (such as “this person only likes medium-sized shirts with stripes”), it has a set of hyper-specific boundaries it can operate within to deliver the best results.
  • Where is the Band Conductor?: Algorithms will continue to infiltrate our everyday lives, but the truth is they still need humans to effectively run them, to tune them, and to make sure that the decisions they are making are the right ones. 

For a more in-depth look at this episode, check out the article below.


Article 

An algorithm is similar to a car in that every now and then, it needs a tune-up. The biggest difference is that instead of rotating the tires or changing the oil, A.I. models need adjusted boundaries and guidelines to remain efficient. And when you live in a world where everyone is trying to get bigger, get faster, and gain a certain edge, Scott Clark is helping make that possible with his finely tuned A.I. modeling techniques.

“As you’re building up these rules and constructs for how that system will even learn itself, there’s a lot of parameters that you need to set and tune. There’s all these magical numbers that go into these systems. If you don’t have a system of record for this, if you’re just throwing things against the wall and seeing what sticks, and then only checking the best one, and you don’t have a system of what you tried, what the trade-offs were, which parameters were the most important, and how it traded off different metrics, it can seem like a very opaque process. At least that hyperparameter optimization and neural architecture search and kind of tuning part of the process can be a little bit more explainable, a little bit more repeatable, and a little bit more optimal.”

More explainable and more optimal, but most importantly scalable and reproducible. On this episode of IT Visionaries, Scott, the CEO and co-founder of SigOpt, a company on a mission to empower modeling systems to reach their fullest potential, explains the basic steps that go into successful models and how his team tweaks and optimizes those models to build more efficient processes. Plus, Scott touches on the future of algorithmic models, including how they will improve and where they struggle.

Clark holds degrees from both Oregon State University and Cornell University, and while he was in school he realized something: the way A.I. and machine learning models were designed and implemented had less to do with actual science and more to do with a tedious guess-and-check process. So after completing his PhD, he decided to do something about it by founding SigOpt, a company dedicated to helping organizations refine their models based on strategic metrics.

“SigOpt is a tool that helps developers with this experimentation process,” Clark said. “Providing a best-in-class optimization algorithm and allowing them to get the most out of their models, as well as a full experiment management suite, providing a system of record for collaboration and reproducibility of this experimentation that would otherwise be lost in a machine learning engineer or data scientist.”
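To make the “system of record” idea concrete, here is a minimal sketch of what logging every trial, rather than keeping only the best run, might look like. This is a generic Python illustration, not SigOpt’s actual API; the train_and_evaluate function and the two hyperparameters are hypothetical stand-ins for a real training pipeline.

# A minimal sketch of the "system of record" idea: log every hyperparameter
# trial (not just the best one) so trade-offs stay inspectable afterwards.
# This is a generic illustration, not SigOpt's API; train_and_evaluate is a
# hypothetical stand-in for real training code.
import csv
import itertools
import random


def train_and_evaluate(learning_rate, num_layers):
    """Hypothetical training run; returns the metrics you care about."""
    # Stand-in for real training: pretend the metrics depend on the knobs.
    accuracy = random.uniform(0.7, 0.95) - 0.01 * num_layers * learning_rate
    latency_ms = 5.0 * num_layers + random.uniform(0.0, 2.0)
    return {"accuracy": accuracy, "latency_ms": latency_ms}


search_space = {
    "learning_rate": [1e-3, 1e-2, 1e-1],
    "num_layers": [2, 4, 8],
}

records = []
for lr, layers in itertools.product(*search_space.values()):
    metrics = train_and_evaluate(lr, layers)
    # Record the configuration *and* every metric, so later you can ask which
    # parameters mattered and how accuracy traded off against latency.
    records.append({"learning_rate": lr, "num_layers": layers, **metrics})

with open("experiment_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=records[0].keys())
    writer.writeheader()
    writer.writerows(records)

best = max(records, key=lambda r: r["accuracy"])
print("Best by accuracy:", best)

Because every configuration and every metric is written down, questions like which parameters mattered most, or how accuracy traded off against latency, can be answered after the fact instead of being lost with the person who ran the experiments.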

The secret behind every successful algorithm is the data that feeds it. This means that if you are going to build an algorithm that is more efficient, more accurate, and more intelligent, you have to find a way to make those datasets work for you and not against what you are trying to accomplish.

“The idea here is if you have a time-consuming and expensive system that has a variety of inputs and a variety of outputs that you want to maximize or minimize, how do you set those inputs in order to maximize or minimize those outputs?” Clark explained. “Think of it along the lines of: you might have a car, and the car has a lot of different knobs and levers inside the engine, different gear ratios, various things that you can tune and tweak. You might want to get the highest fuel efficiency, or the highest speed. And you would go about tuning that car in various ways, but you, as the driver of the car, might not have the expertise to do that kind of thing via intuition. So you would need a system that is not just trial and error. Every machine learning system has hyperparameters or architecture parameters, or ways that you can transform the way that you’re representing the data itself. All of those magic numbers govern what the end system will end up looking like. So we bolt on top of these pipelines and allow people to do that experimentation, that exploration, orders of magnitude more efficiently, in a much more streamlined and seamless way.”
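The car analogy maps onto a standard black-box tuning loop: a set of knobs you can set, an expensive system you can evaluate, and a fixed budget of evaluations. The rough sketch below uses a made-up fuel_efficiency function and naive random proposals; an optimizer like the kind SigOpt provides would replace the proposal step with an adaptive, model-based strategy that learns from earlier evaluations rather than guessing blindly.

# A rough sketch of a black-box tuning loop: knobs (inputs), an objective
# (output) to maximize, and a fixed evaluation budget. fuel_efficiency is a
# made-up stand-in for the real, expensive-to-evaluate system, and propose()
# is deliberately naive random sampling.
import random


def fuel_efficiency(gear_ratio, fuel_air_mix):
    """Hypothetical expensive black box: returns the metric to maximize."""
    return -(gear_ratio - 3.2) ** 2 - (fuel_air_mix - 0.9) ** 2 + 40.0


def propose():
    """Naive proposal strategy: sample the knobs uniformly at random."""
    return {
        "gear_ratio": random.uniform(2.0, 5.0),
        "fuel_air_mix": random.uniform(0.5, 1.5),
    }


budget = 30  # each evaluation of the real system is assumed to be costly
best_config, best_value = None, float("-inf")
for _ in range(budget):
    config = propose()
    value = fuel_efficiency(**config)
    if value > best_value:
        best_config, best_value = config, value

print(f"Best configuration found: {best_config} -> {best_value:.2f}")

The useful part of this structure is that only the propose step has to change to go from trial and error to something smarter; the expensive system itself stays a black box.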

While algorithms have seeped into our everyday lives, helping us make decisions based on our recurring buying patterns or find the most efficient routes based on the time of day, Clark said that regardless of how they continue to evolve, they will still need guidance along the way.

“Machine learning and A.I. algorithms in general need to be pointed in the right direction,” he said. “They can learn novel ways to solve problems, but they need to have metrics that they care about.”

To hear more from Clark about where algorithms might struggle, and to learn more about his company SigOpt, check out the full episode of IT Visionaries!

 


To hear the entire discussion, tune in to IT Visionaries here.
