
Friday Q&A: Longin Jan Latecki of Temple University Summer Research Program

If you ever want a robot to be able to get you coffee, it has to be able to see.
So, really, Dr. Longin Jan Latecki, a computer science professor at Temple University, is doing us all a favor. Latecki, whose research focuses on the half-century-old concept of computer vision, is one of 22 Temple faculty who are participating in the university’s inaugural Summer Undergraduate Research Program (SURP).
The program gives students the chance to earn up to a $4,000 stipend, funded by an equal match between the College of Science and Technology and the researcher’s grant.
Latecki is originally from Poland and is one of two professors working on more than one project for SURP. He came to Temple in November 2001, after stints at the Technical University of Munich and the University of Hamburg, both part of Germany’s storied university community.
SURP, which includes faculty from Temple’s CST, the College of Engineering and the School of Medicine, aims to bolster the research chops of Temple undergraduates. More than 270 students applied for the program, and some 150 interviewed with faculty for just 40 available positions during a university event held on March 31.
Below, Latecki, who is also leading a project on the interaction of light with matter, talks to Technically Philly about SURP, his computer vision research and what it takes to get a robot to get me some damn coffee.
Interview edited for length and clarity.
Talk to us about your academic pursuits.
My main research area is computer vision, finding ways to understand images the way humans do and to do it by intelligence testing… We’re in the age of digital photography, so getting digital images into a computer is easier than ever, but understanding what is in the image, well, there is still a lot of work to be done there. You can find a car or chair in an image [with] no problem, but computer software is still not there. Computers do a lot of things better, but in basic cognitive abilities, they still don’t match what humans can do.
Tell us about SURP and your involvement.

“If you want robots running around doing useful things, we need to give them useful vision abilities.” -Dr. Longin Jan Latecki

I have a research grant from the National Science Foundation, and the college [of science and technology] lets you apply for this supplement, this [matching grant]. …Our dean supports research for undergraduates, and I can really support this program… so I got involved. The main goal is to really get undergraduates exposed to challenging problems, to give them a better understanding of them. Whatever we do, we get them thinking and researching like a graduate student or more… Three students work with me for 15 weeks. They started May 18 and work until the end of August. That way they can understand better what I’m doing and really contribute and learn something.
Explain your SURP project like I was a 10-year-old.
[Laughs] The key idea is to, well, if you have a computer algorithm that is supposed to give a computer human-like ability to detect and recognize images, how do you test the algorithm to make sure it does what it is supposed to do? You have [a] standard test data set. There would be, say, 1,000 images: 100 of a car, 100 of a chair, 100 of another object and so on. You ask the computer what is in each image, and you measure whether it got it correct. We are testing, yes, but the problem is that if you just take those photos and the computer doesn’t do a good job, you don’t know why.
If you generate computer images and have the computer evaluate those, you have control of… whether the picture was missed because the image was too bright or whatever else, because all of those features are very important.
It is very useful to find out what didn’t go well just as much as what did go well. Meanwhile, we just learned the best algorithms for object recognition [by seeing what didn’t work]. When this stage is reached, we try to analyze the main problems when the algorithm fails, and why it fails. With human vision, we are not really aware of the process. When you look for an object or whatever, [we] do it nicely. Our human vision can do it well… but computers still can’t. We want to figure out… what are the biggest problems.
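To make that testing idea concrete, here is a minimal sketch, not Latecki’s actual system, of how such an evaluation might look in code. The `recognize` function and the dataset fields are hypothetical stand-ins: a recognizer is scored on a labeled test set, and because the images are computer-generated, each mistake can be grouped by a controlled attribute such as brightness to see where the algorithm struggles.

```python
# A minimal, hypothetical sketch of evaluating an object recognizer on a
# labeled test set of computer-generated images, then grouping its failures
# by a rendering attribute (brightness) that we control.
from collections import Counter

def evaluate(recognize, test_set):
    """test_set: list of dicts with 'image', 'label' and 'brightness' keys."""
    correct = 0
    failures_by_brightness = Counter()
    for example in test_set:
        prediction = recognize(example["image"])  # e.g. "car", "chair", ...
        if prediction == example["label"]:
            correct += 1
        else:
            # Because we generated the image, we know its brightness bucket and
            # can ask whether failures cluster in over- or under-lit scenes.
            failures_by_brightness[example["brightness"]] += 1
    accuracy = correct / len(test_set)
    return accuracy, failures_by_brightness

# Hypothetical usage with a stub recognizer that always answers "car":
# acc, fails = evaluate(lambda img: "car",
#                       [{"image": None, "label": "chair", "brightness": "high"}])
```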
Why is this research important?
This is very important… for the future of robotics. The main resource for humans is vision. Most of our brain power goes into visual input and analyzing it. This is the main window into how we see the whole world. So if you want robots running around doing useful things, we need to give them useful vision abilities. This is something very necessary for the future. You may find shortcuts for controlled environments to let [artificial intelligence] develop without vision through some other means, but this is not a robot that can go get a cup of coffee for you.
Tell us about the undergraduate students on your team.
The selection process was very nicely organized through the dean’s office. The matching program is more suitable for putting undergraduates on the job, ones who go through an interview process and are recommended by other professors, which was a two- to three-month process. … I am now working with three students with different strengths. They all have a background in mathematics… one is strong in math and computer science with some software development, another student is stronger in software development and one student does very nice independent work separate from any specific field… It’s about advancing research and learning, and it’s a chance to mentor them, give them a taste of real applied research, with the grants and a real end goal.
So, what is the goal for this project at the end of this first SURP session?
Whether I achieve it is another question, but if I can really get an understanding of the real challenges of evaluating state-of-the-art object detection and recognition, this will be a great success. … This research has 50 years of history, in trying to get computers to analyze images, to get computers to understand. This is not research known to the public because, well, there are not so many useful systems [using the research]. This is something for the future.
Watch a Stanford professor discuss groundbreaking computer vision research in 1971
https://www.youtube.com/watch?v=O1oJzUSlTeY
You’ve come from some prestigious European research communities. Can you discuss Temple and Philadelphia’s technology research community, particularly in computer sciences or robotics?
I think it’s a great environment. Starting in 2000, before I got [to Temple], the university’s computer information sciences department… started growing as a research institution. Meanwhile, since 2008, more researchers have been hired, and we are all working in data mining, machine learning and computer vision. We have a very nice research environment with a lot of interactions within Temple University.
And this is a city of interaction, with places like the information sciences at Penn. Also, we have… interaction with researchers at other departments inside this university. This is definitely a good place to be. The leading research [on computer vision] is done in the U.S. You can see this in Philadelphia. This is a very research active area, particularly in computer science… I’m happy to be here.
[Full Disclosure: The author of this story is a 2008 graduate of Temple University.]

-30-
Every Friday, Technically Philly brings an interview with a leader or innovator in Philadelphia’s technology community. See others here.
