(Photo by Margaret Roth)
After spending four years in the Johns Hopkins Biomedical Engineering (BME) undergraduate program, Josh Budman went on to a fifth-year master's program in the Center for Bioengineering Innovation and Design (CBID) with his classmate, friend, and soon-to-be cofounder Kevin Keenahan.
Firmly on the med school track, Budman never planned to start Tissue Analytics and lead its technical team as CTO. But after their CBID rotations revealed the inefficiencies in wound care and the need for technological innovation, and after acceptance into the Dreamit Health accelerator program and a deferral of med school, Budman and Keenahan set out to change the way imaging technologies are applied to empower care providers.
“You’re developing ideas, you’re putting them into real life, and people are using them. There is nothing I think that is cooler than that, especially if it can deliver a quantifiable benefit to patients or to users,” he said of the technology.
We spoke with Budman about the series of personal pivots and calculated technological risks that led Tissue Analytics to where it is now, five years in. This conversation has been edited for length and clarity.
Tell us the highlights of your career so far. How did you get where you are now?
My undergrad focus area was imaging, which was new at the time. BME gave you a few focus areas, and I chose imaging because I was really into the computational aspect. In the CBID program we had a focus on wound care. It was during our clinical rotations that we saw how archaic wound care was. We actually saw providers literally arguing with their patients over whether or not their wounds were healing. We thought: "Okay, that's a problem," but we identified other needs in the wound care space too. It wasn't until later in the year that we realized the field that was least explored and best suited to our backgrounds — especially mine being in the imaging space — was automating the actual measurement and documentation of wounds: making it more accurate and more efficient, and cutting the noise out of wound care documentation. So that's what we set out to do, with a focus on the software piece. That's how Tissue Analytics was born.
During the CBID program I was applying to medical school. I ended up getting into McGill, and at that time both my parents very much wanted me to go to med school. At the same time, Kevin Keenahan, our CEO and my cofounder, and I got into the Dreamit Health program in Philly. I'll never forget it: I was at the World Cup. It was the first vacation that I had taken in a while. I was messaging Kevin and he was like, "Hey! We got into the Dreamit program. We have like 24 hours to decide — we can't go with one founder, we have to go with two." I knew that if I went for the Dreamit program I would have to defer, and I said okay. One year. McGill will give you one year, so I deferred.
During that year we got funding from Tencent and other angel investors. It was really exhilarating to see the machinations of my brain and the technology getting into the hands of users and, eventually, patients. I thought this series of very fortunate events was probably not something I could guarantee ever happening again. But McGill would only give me one year to defer. I withdrew from med school, and I haven't looked back since.
Explain imaging to the non-BME, and then explain some of the aspects of your tech stack. What’s unique about it?
Imaging can be anything from the tech behind medical imaging devices (the type of stuff large companies like Siemens make, like large scanners and software for MRIs) to the field of digital image processing, which is what Tissue Analytics does. Our software looks at images and tries to extract knowledge or information from them automatically. You give us an image and we do a semantic segmentation, or interpretation, of it: we see an image and try to tell a clinical user what's inside it in a very accurate way. Lately, with the boom of deep learning, machine learning, and artificial intelligence, imaging tends to incorporate a lot more of that. When I was studying at Hopkins, the AI and deep learning space was not very well explored; it was just in its nascency in terms of how commoditized it was.
Even when we started the company, deep learning technology was a lot less commoditized, and we didn't come into this with a large dataset. We had collected a few images in our time rotating through Hopkins. That was what we used to train our initial models, but our initial insight was that we should let the system improve over time. Thankfully, we read the tech landscape correctly. These technologies have become so much more commoditized, and it's really our dataset that defines our technology now. We have a dataset that is second to none, and that's why we can leverage the really great innovations in the deep learning space that have come out since 2013, 2014.
For example, the use of convolutional neural networks (CNNs). There are now so many frameworks built into programming languages that people are used to. With an intermediate knowledge of Python and a good dataset, you can start to incorporate these convolutional networks into your algorithm. Of course, it requires expertise to incorporate them in a marketable way that is suitable to end users — that's the hard part — but on an academic level it's really easy to incorporate some of these networks. The combination of CNNs, and how commoditized they've become, with a great dataset is where we find ourselves able to provide extremely accurate data in a very automated, highly reliable, high-integrity system.
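The semantic segmentation Budman describes — telling a user what's inside an image, pixel by pixel — rests on the convolution operation that CNNs stack and learn. As a toy illustration only (this is not Tissue Analytics' actual algorithm, and the image and kernel below are made up), here is a minimal NumPy sketch: one hand-written convolution over a synthetic 8x8 "image" with a bright region, followed by a threshold that classifies each pixel and counts the segmented area, the kind of automated measurement he refers to.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over the image ('valid' mode): the core op of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic 8x8 grayscale "image": a bright 4x4 region on a dark background.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# A trivial 3x3 averaging kernel; in a real CNN these weights are learned.
kernel = np.full((3, 3), 1.0 / 9.0)

feature_map = conv2d(image, kernel)

# Per-pixel classification: threshold the feature map into a binary mask,
# then count the "segmented" area in pixels.
mask = feature_map > 0.5
print("segmented pixels:", int(mask.sum()))  # prints: segmented pixels: 12
```

In practice the kernel weights are learned from data and many such layers are stacked; modern frameworks reduce all of that to a few library calls, which is the commoditization Budman points to.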
In 2015, when we had maybe enough data to train a convolutional network, the frameworks available to someone like me, who wasn't directly developing novel convolutional network software, were extremely difficult to use. You had to modify all sorts of config files on your server, and it was really messy — all sorts of plain text files, and formatting was a big deal. Now everything has become so robust that you can just bring methods into your code and you have a convolutional network running on your server. It's really different now than it was even three years ago. I will say, to Kevin's and my credit, that we did read that correctly. We had to take a risk, but we thought, okay, this is going to catch up. And it did. The level of quality and commoditization of deep learning technologies has definitely caught up to the quality of our data. That's been de-risked, and we didn't have to build the framework to do it.
I think what's unique about our stack, and more and more companies, especially in health IT and deep learning, are trending this way, is the combination of different languages. Our deep learning code all runs in Python, and that's one isolated component of our software. Then we have our back end and business logic, which are in Java. We have our UI components, which are mobile and web. What's unique is that we unite all of this stuff in different languages, and on top of that foundation we build integrations. The other thing that makes our company unique: in addition to realizing how valuable the data was, we were willing to sacrifice some initial quality rather than wait for the algorithm to get perfect. We released something with a little lower quality and watched it improve. The other big realization we made was that no matter how good your software is, if it doesn't integrate with the electronic record and with end user workflows in a clinical setting, no one is going to use it. I often go so far as to say, and I don't even know if this is true but I'll say it anyway: even if your software cured cancer, people wouldn't use it if it didn't integrate with their workflows in a healthcare setting.
A lot of people talk about tech replacing doctors or providers. Personally, and I'm not saying this is a monumental claim, I do not see that at all. The best way the space can go is allowing software to serve as an aid, and maybe even a recommender, to help make decisions easier for providers and deskill decisions. A lot of providers in many spaces, wound care especially, suffer from decision fatigue. I think relieving that decision fatigue is a big one, because I don't think healthcare providers are defined by the rudimentary decisions; it's the hard decisions. Right now we're just allowing clinicians to collect better data more accurately and more efficiently, but that could potentially unlock a decision aid in the future, a decision fatigue reliever. That's where I see it optimally going: making more standard, data-driven decisions and allowing providers to do what they are better at. But definitely not taking their jobs away. I don't see software going that way, and I don't intend to make software that does that, not in a million years.
We like human interaction; no one wants to be diagnosed by a machine. But sometimes, when it's hard to see a provider or when you want a quick decision made immediately, deskilling the decision is something everyone would be happy about. Providers don't want their offices filled up by those cases; they didn't study for years and years to see very basic and obvious things. The goal is to bring those decisions down to a user who hasn't trained for years and years to make hard decisions, but who can see what the software says, knows what it means, and can be empowered to be the frontline provider.
What in your life has made you so okay with risk?
I have been, to date, the most risk-averse person. One of the reasons I wanted to go to medical school was because — I don't want to sell that field short, it's risky in that there are no guarantees of reaching your goals in medicine — but it's a linear path and the prospects of just straight-up failure are very, very small. That's one of the reasons I thought it was a good path. So I think after I decided to do this instead of that, I realized that my whole life has been choosing the less risky path. Even throughout my academic career I was hedging. I would try to put eggs in a bunch of different baskets to hedge my bets.
For instance, I ran track and cross country all four years at Hopkins. I don't regret it, and it was one of the best experiences ever, but I did it so I could have something to lean on in case school didn't go well. I realized at a certain point that I'd just been hedging the whole time, and when I made the decision to go all in on Tissue Analytics, I just wanted to see what I could do if I put all of my eggs in one basket. I've had this epiphany lately. I'm still very, very risk-averse in other areas, but I want to see how I can facilitate the success of an entity with this great team if I just put all my eggs in this basket. That's sort of how I think now. I want to see how far it can go with me being all in.
When it comes down to it, why did you start this company and what about it keeps you so motivated?
I think initially I started this company in year one because it was something exciting to do and something that I wouldn’t have the opportunity to do again, and it was in line with the same values that made me want to go to med school, which was delivering care to patients and helping improve their lives. As I got to the point where I had to decide between this and med school, the big thing for me was that it was just such a good way to utilize my engineering skills and what I had learned. I’m not saying that medicine wouldn’t have been, but this was a way to directly apply my engineering skills to concepts that were eventually delivered to an end user. That’s as raw as it gets. You’re developing ideas, you’re putting them into real life, and people are using them. There is nothing I think that is cooler than that, especially if it can deliver a quantifiable benefit to patients or to users. These things are coming from our brain and you blink and then people are using them.
And I’ve seen it happen before my eyes. It’s like our baby. I can see the algorithms get better and better over time, the analysis gets better and better and more and more accurate and now we can analyze a wound or skin condition in three dimensions. We’re still validating this, but we can do that as well as a $17,000 scanner — with a smartphone. To me that is baffling and I think that it’s up to us to bring that benefit to the end users and to the patients. That’s a big motivator for me, that we’ve managed to bring great tech, make it meaningful, and make it impactful on an end user, and to then impact patient care with awesome technology.
What is the one thing that you do, completely outside of your company for yourself?
What keeps me absolutely sane is that I play a ton of soccer. That’s my best form of therapy. I play three, sometimes four days a week. I play in a Volo City league, in a Charm City Soccer league, and I play on a team in the Maryland Majors which is a really good amateur league in this area called the Baltimore Kickers. I’ve been playing with them since 2015. We do one or two practices and a game a week. It’s really competitive, but I love it so much. It’s one of the only things in life that I have an unbridled love for, no conditions, just absolutely love it. Sometimes people say to me, “Oh you must not be able to sleep.” I stay up because I’m stressed, not because I don’t have the hours I need to sleep. I just can’t do anything else. There’s work, soccer, and sleep. If it’s after work and soccer, there’s no time for anything else.