The world has changed drastically in the last 10 years. From the rise of the smartphone to the move to the cloud to new uses of A.I., how technology is used keeps evolving, and it’s not slowing down. Jason Hoffman is Chair of the Board of Directors and President & CEO of MobiledgeX, and on this episode of IT Visionaries, he explains why everyone in the industry is asking the same question, what’s next, and the role edge computing will play in the answer.
Best Advice: “Don’t ever have a strategy based on technologies. You want a strategy based on why you’re doing something and then, as different technologies show up, you want to be able to slot them in.”
Key Takeaways
- Technically and architecturally, there is no difference between the cloud and the edge
- Advances in technology and the demand for more collaborative, interactive experiences mean compute speeds need to increase
- The biggest question facing the industry is: what comes next?
All about MobiledgeX
Jason explains that MobiledgeX works on the demand side of edge computing: identifying and building the backends of the experiences people will be looking for on their devices in the future.
“A lot of what we do is on existing devices, not new devices, and looking at what the actual use cases are going to be. What are the new experiences that will show up on these types of devices? How do they take advantage of what is in the network and in the infrastructure, and what will those new backends need to be?”
The state of the edge
Much of the current conversation about the state of the edge comes down to differentiating edge computing from cloud technology. Edge and cloud certainly go together, but Jason explains that the edge offers far more precision: there are specific reasons why you compute at the edge in specific locations. Edge computing is also just reaching the point of becoming necessary, because our devices need to work faster than ever before. Better technology and more interactive, collaborative experiences are creating environments where faster compute is required, and to achieve that you need to work at the edge. For these reasons, Jason believes edge computing is about to become bigger than ever.
“When you think of the world in a very simple way with clouds that connect to about six layers of infrastructure before they hit a device, all that stuff in between is what we call the edge. And if you look at where it typically is, it’s typically in the guts of the mobile networks themselves.”
“Technically and architecturally, there really is no distinction between cloud and edge. The only real distinction is that your stuff is off in the cloud, and maybe the most detail you know about where it is, is that it’s in Frankfurt or on the East Coast of the U.S. You may know it on a city or a state or a regional level. In the case of an edge location, you know exactly where it is. You can walk up and point to where the spot is, as it has a highly precise and accurate GPS location, and there’s a reason why you’re doing what you’re doing in that spot.”
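That precision is something software can act on directly. As a rough sketch (the site names, coordinates, and routing logic below are hypothetical illustrations, not the MobiledgeX API), picking where to compute can be as simple as minimizing great-circle distance to a known edge site:

```python
import math

# Hypothetical edge sites with precise coordinates; illustrative only,
# not real MobiledgeX sites or APIs.
EDGE_SITES = {
    "frankfurt-cell-tower-17": (50.1109, 8.6821),
    "munich-aggregation-hub-03": (48.1351, 11.5820),
    "berlin-central-office-09": (52.5200, 13.4050),
}

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_edge_site(device_location):
    """Pick the edge site closest to a device's GPS fix."""
    return min(EDGE_SITES.items(),
               key=lambda site: haversine_km(device_location, site[1]))

name, coords = nearest_edge_site((48.7758, 9.1829))  # a device in Stuttgart
print(f"Route this workload to {name} at {coords}")
```

Real placement decisions weigh network topology and load as well, but the core idea is the same: the edge is a place you can point to.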
“Clearly, the networks of today and the devices of today are good enough for people to talk to people: you can have a real-time, bi-directional conversation between two human beings. It’s also clear that we know how to make the infrastructure of today good enough that webpages and videos load fast enough that people don’t really feel like they’re sitting there waiting. And clearly we do things well enough that humans can look at something and rapidly learn it if they need to, or go through a longer learning process because they want to learn a language on YouTube or something like that. But when you stop and think, it’s about a human. It’s about voice, it’s about human vision, and then it’s about human learning and the internet. This infrastructure is currently designed to be fast enough for those things to happen.”

“When you start thinking about the human eye being replaced by a camera or a series of cameras, well, now we’re getting to the point where the resolution of those cameras is higher than the human eye. It can see further out. So when you start thinking about 32K-resolution cameras that can see a kilometer out while you can still zoom in to 4K, that is a higher resolution than the human eye. The second that has to occur faster than we would expect, because it’s trying to identify something and make a decision, the current networks that are fast enough for the human eye aren’t going to be fast enough for a computer.”
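To put those numbers in perspective, here is a back-of-envelope calculation. The 32K and 4K figures come from the conversation; the 16:9 frames, 30 fps frame rate, and 24-bit raw pixels are illustrative assumptions:

```python
# Back-of-envelope video bandwidth. Resolutions echo the quote above;
# frame rate and bit depth are assumptions for illustration.
def raw_bitrate_gbps(width, height, fps=30, bits_per_pixel=24):
    """Uncompressed video bitrate in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

ultra = raw_bitrate_gbps(30720, 17280)  # hypothetical 32K machine-vision camera
human = raw_bitrate_gbps(3840, 2160)    # 4K, roughly "good enough" for human viewing

print(f"32K raw stream: ~{ultra:.0f} Gbit/s, about {ultra / human:.0f}x a 4K stream")
```

A 32K frame carries roughly 64 times the pixels of a 4K frame, so even with aggressive compression, hauling that video to a distant cloud for analysis quickly stops making sense; processing it near the camera does.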
“Machine learning hasn’t quite caught up with aspects of human learning, but as it does, this whole infrastructure that we’ve built is not going to be fast enough for those things. We are starting to head into a world where people are talking to machines, machines are talking to each other, machines are actually using cameras as eyeballs to see things, and they’re using computer brains to learn and make decisions.”
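There is also a hard physical floor underneath all of this: signals in optical fiber travel at roughly 200,000 km/s (about two-thirds the speed of light), so distance alone bounds round-trip time. A quick sketch, with example distances chosen for illustration rather than taken from the episode:

```python
# Best-case round-trip times over fiber; ignores routing, queuing, and
# processing delays, so real latencies are higher. Distances are examples.
FIBER_KM_PER_MS = 200.0  # light in fiber covers ~200 km per millisecond

def min_round_trip_ms(distance_km):
    """Lower bound on round-trip time to a compute site over fiber."""
    return 2 * distance_km / FIBER_KM_PER_MS

for label, km in [("distant cloud region", 1500),
                  ("metro edge site", 50),
                  ("cell-site edge", 5)]:
    print(f"{label:>20}: >= {min_round_trip_ms(km):.2f} ms round trip")
```

No rollout of faster radios or smarter protocols changes that bound; the main lever left is moving the compute closer, which is exactly what the edge does.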
The evolution of the industry
Jason has seen a lot through the years, from the rise of the mobile phone to the emergence of apps to the transition to the cloud, and now to the brink of the move to the edge. This next era will require a much more distributed model, one that relies more on machines talking to machines than on humans talking to machines. The question everyone is asking now is: what’s next?
“There’s this sort of question of, well, what’s next? There’s going to be something next. It could be glasses or headsets. Or is a drone next, or a car next? There’s sort of this explosion of emerging devices, and I think we’re in a similar period of time where, whether it be the next two years, three years, five years, or seven years, something is going to show up, or needs to show up, that’s basically as interesting as the iPhone was when it did.”