News

StradVision’s CEO Junhwan Kim talks about driver-assistance technologies

2020.06.26

 

StradVision is a pioneer and innovator in vision processing technology for autonomous vehicles and advanced driver assistance. Its AI-based camera perception software allows Advanced Driver Assistance Systems (ADAS) in autonomous vehicles to reach the next level of safety, accuracy, and driver convenience.

The company made headlines recently when it received funding from leading global automotive supplier AISIN Group through its Corporate VC Fund, managed by Silicon Valley-based global venture capital firm Pegasus Tech Ventures.

To understand their views and product in light of the growing demand for driver-assistance technologies for SUVs, sedans, and buses, we approached StradVision’s CEO Junhwan Kim and Anis Uzzaman, General Partner and CEO of Pegasus Tech Ventures, via email.

Both leaders were happy to respond to our questions. Before joining StradVision, Junhwan Kim spearheaded Olaworks, a facial recognition company that became the first Korean company to be acquired by Intel.

You can read the complete interview below:

Conventional ADAS technology, used in Level 1–3 vehicles, is almost everywhere. But the next big thing in this market segment is the development of new ADAS systems for fully autonomous vehicles (Levels 4 and 5). Can you tell us about StradVision’s deep learning-based camera perception software for highly integrated autonomous systems, and how it works?

Junhwan Kim: ADAS up to Level 2 is quite common nowadays, but from Level 3 onward all legal liability sits on the automaker’s side, so it is not yet prevalent – public and private infrastructure (e.g., insurance, policies, roads) has not been established to accommodate such technology. Currently, as of 2020, the average investment that goes into a mass-market ADAS vehicle is $250 – this covers the camera, chipset, software, and so on – and the camera is the dominant winner in this category. Moreover, even if the cost of other sensors, such as LiDAR, drops dramatically, we doubt it will fit within the growth of the ADAS budget, which is around $50 a year. Furthermore, as time goes by, camera functionality and accuracy will grow, which will eventually offset the unique selling points of other sensors – the camera can do virtually everything ADAS functions require.

Regarding Level 4 and Level 5 autonomy, StradVision already has a Level 4 people-mover mass-production project in the EU – the shuttle will be deployed in 2021 and will transport people from mid-sized towns to urban areas, and vice versa. It is a sensor-fusion project that utilizes multiple sensors, from LiDAR to the camera. For the near term at least, Level 4 and above will follow the ‘sensor fusion’ model, where the key strengths of each sensor are exploited to provide the safest self-driving experience.

But over time, we believe the share of camera-based perception will increase, due to the inevitable technological advancement of the ‘jack of all trades’ camera and the perennial cost factor. As for the technical details of StradVision’s perception software, SVNet: the deep learning-based, camera-oriented embedded network enables cameras to execute high-level functions such as Object Detection, Lane Detection, Free Space Detection, Distance Estimation, and much more.

Automakers can use these outputs to run most ADAS and autonomous-vehicle functions. One thing to note is that SVNet is very small and lean, so it leaves much more freedom and headroom to do more for ADAS and autonomous vehicles with high accuracy.
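To make the multi-task idea above more concrete, here is a minimal sketch of a shared backbone feeding separate heads for object detection, lane detection, free space, and distance estimation. It is purely illustrative – the class names and layer choices are our own assumptions, not SVNet’s actual architecture or API.

# Hypothetical multi-task camera-perception sketch (illustrative only, not SVNet).
import torch
import torch.nn as nn

class TinyPerceptionNet(nn.Module):
    """One small shared backbone feeding several lightweight task heads."""
    def __init__(self, num_classes: int = 8):
        super().__init__()
        # Lean shared feature extractor, in the spirit of an edge-friendly network.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Per-pixel heads: object classes, lane mask, free-space mask, distance map.
        self.object_head = nn.Conv2d(32, num_classes, 1)
        self.lane_head = nn.Conv2d(32, 1, 1)
        self.free_space_head = nn.Conv2d(32, 1, 1)
        self.distance_head = nn.Conv2d(32, 1, 1)

    def forward(self, frame: torch.Tensor) -> dict:
        features = self.backbone(frame)
        return {
            "objects": self.object_head(features),
            "lanes": torch.sigmoid(self.lane_head(features)),
            "free_space": torch.sigmoid(self.free_space_head(features)),
            "distance": self.distance_head(features),
        }

# Example: one 1280x720 RGB frame from a front-facing camera.
model = TinyPerceptionNet().eval()
with torch.no_grad():
    outputs = model(torch.randn(1, 3, 720, 1280))
print({name: tuple(t.shape) for name, t in outputs.items()})

The point of the sketch is the design choice Kim describes: one compact network producing all the perception outputs that downstream ADAS functions consume, rather than one sensor or model per function.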

Autonomous vehicle engineers face a mountain of challenges. One is the development of software stacks that can execute hundreds of millions of lines of code to handle the countless scenarios encountered on the road. Can you tell us about the key challenges you have overcome during the development of ADAS for driverless vehicles?

Junhwan Kim: It is apparent that data is imperative to any kind of software development nowadays. For ADAS and autonomous vehicles, not only is acquiring data important; applying and optimizing that data is critical as well. Our job at StradVision is to enable millions of these data points to be processed efficiently, through our deep learning-based perception algorithm SVNet, on an edge device that does not cost an arm and a leg. To make this happen, we developed SVNet to be as small as possible to begin with – we initially targeted the smart glasses industry – and from that point, we optimized the software to be as efficient as possible on target automotive-grade hardware. But this presents a few problems.

First, fitting robust deep learning-based software into market-realistic, economical hardware means that the software needs to accommodate a plethora of hardware limitations. Though our specialty is indeed network compression and hardware optimization, balancing the size of the software against automakers’ expectations is always a herculean task. But despite all this, we overcame this challenge a while ago and already have nine million Level 2 ADAS and Level 4 people-mover units in production, from China to Germany.
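Network compression is a family of standard techniques such as pruning and quantization rather than anything specific to one vendor. As a generic illustration only – this is not StradVision’s proprietary pipeline, and the model below is a stand-in – magnitude pruning in PyTorch zeroes out the smallest weights so a network fits tighter hardware budgets:

# Generic illustration of network compression via magnitude pruning
# (a standard technique; not StradVision's actual compression pipeline).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

# Zero out the 50% smallest-magnitude weights in every conv layer, then make
# the pruning permanent so the sparse weights can be exported to whatever
# format the target automotive-grade chipset expects.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"{zeros}/{total} weights pruned to zero")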

Another challenge we overcame relates to data. Creating a robust AI requires a tremendous amount of data and, of course, labeling. Whereas automakers and our competitors use armies of data labelers to annotate data in order to train their algorithms, StradVision has an auto labeling tool that automates 97% of the process, which makes network training extremely scalable and results in increased accuracy and safety. Furthermore, the remaining edge cases are scrutinized by our data specialists, and we directly apply those risk factors to our software to anticipate and mitigate them.
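As a rough sketch of the general idea of auto labeling with a human in the loop – the threshold, names, and data structures below are our own assumptions, not StradVision’s actual tool – a confidence gate can accept high-confidence model predictions as labels and route the remaining edge cases to specialists:

# Hypothetical confidence-gated auto-labeling loop (illustrative only).
from dataclasses import dataclass

@dataclass
class Prediction:
    frame_id: str
    label: str
    confidence: float  # model confidence in [0, 1]

def split_for_labeling(predictions, threshold=0.9):
    """Accept high-confidence predictions as labels automatically;
    route everything else (the edge cases) to human specialists."""
    auto_labeled, needs_review = [], []
    for p in predictions:
        (auto_labeled if p.confidence >= threshold else needs_review).append(p)
    return auto_labeled, needs_review

# Example: three frames scored by a pre-trained detector.
preds = [
    Prediction("frame_0001", "pedestrian", 0.98),
    Prediction("frame_0002", "vehicle", 0.95),
    Prediction("frame_0003", "unknown_object", 0.41),  # edge case -> human review
]
auto, review = split_for_labeling(preds)
print(f"auto-labeled: {len(auto)}, sent to specialists: {len(review)}")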

For algorithms to “see” and make correct real-time decisions under all driving conditions, they require the right suite of sensors combined with enough hardware performance. This raises a great many hardware and software issues. Many autonomous vehicle companies build their own sensors to fit their specific requirements. As a software solution provider, how do you plan to deal with market challenges and innovations in sensors and processors?

Junhwan Kim: In 2020, the average investment that goes into a single mass-market ADAS vehicle is $250. Various companies tout their technical prowess, but they often gloss over the fact that the hardware (FPGA, GPU, SoC, etc.) that enables their technology far exceeds that budget, and over how many vehicles they have actually deployed in the real world. Developing bleeding-edge technology is one thing, but deploying it in a safe and accurate manner so it can be immediately used by the masses is a completely different story.

We at StradVision anticipated from the get-go that the mass market would not exponentially increase ADAS & AV investment in the short to mid-term, and we focused on making our technology compatible with whatever technical and financial legroom the mass-market automotive industry has. Accordingly, we developed our software to be as small, lightweight, and hardware-agnostic as possible, so automakers can adjust accordingly and customize each vehicle model to their needs.

Furthermore, sticking to the fundamentals and doubling down on deep learning and camera-based perception software enabled us to create a niche that gave us a clear identity and edge – camera-based perception will inevitably become ubiquitous, as it can handle virtually all ADAS & AV-related functions, and its inherent drawbacks can eventually be offset through advancements in software.

Is it 100% possible to accurately predict all likely human behavior on the road, including potentially irrational responses in various situations? What is your take on overcoming limitations to ensure active, autonomous safety?

Junhwan Kim: We do believe that most human behavior can be predicted in the mid-term in a closed environment or situation, such as an in-cabin driver or vehicles on a four-lane highway. The complexity of the environment and its unique context will be the core determining factors, but with data accumulation and advances in deep learning and AI, environmental variance can be overcome over time.

On an immediate note, to ensure active, autonomous safety, we both streamline our software development processes and follow the strict functional safety requirements mandated by the automotive industry. StradVision last year achieved the coveted Automotive SPICE Capability Level 2, which is quite an achievement for a deep learning-based software developer. We also met the Guobiao standard for the front-facing camera in China.

*********************************

Junhwan Kim’s views are echoed by Anis Uzzaman, General Partner and CEO of Pegasus Tech Ventures, which takes a unique approach with its Venture Capital-as-a-Service (VCaaS) model for corporations around the world.

What is your take on autonomous driving?

Anis Uzzaman: Autonomous driving is a fascinating topic. If we think about it, people have been putting their safety in the hands of Uber drivers and airplane pilots in exchange for convenience, efficiency, and cost savings. Interestingly, the number of states allowing deployment of AVs on public roads has been increasing, but the vast majority of Americans have yet to trust autonomous vehicles and would not be eager to take advantage of the technology even if regulations allowed full deployment nationwide. It is important to note that 90% of all traffic deaths are caused by human error. Therefore, a vehicle programmed to drive itself may end up saving tens of thousands of the people who are killed each and every year.

For the most part, the technology that can enable autonomy and disrupt the mobility space already exists. With all that in mind, there is still no self-driving stack that can outperform a human’s dynamic decision-making. While regulations regarding the deployment of autonomous vehicles are becoming more favorable, autonomous vehicles as a regular daily mode of transportation are still many years away, and we do not anticipate them becoming mainstream until well into the late 2020s.

source: https://roboticsbiz.com/stradvisions-ceo-junhwan-kim-talks-about-driver-assistance-technologies/