Dedicated to Autonomous Vehicle Technology
Plenary Hall
Chair: Chen Sagiv, SagivTech
Amnon Shashua, President & CEO of Mobileye and Senior Vice President, Intel Corporation
Danny Shapiro, Senior Director of Automotive, NVIDIA
Benny Daniel, Vice President - Consulting, Mobility-Europe, Frost & Sullivan
Omer David Keilaf, CEO & Co-Founder, Innoviz Technologies
Chair: Nate Jaret, Maniv Mobility
Kobi Marenko, Co-founder & CEO, Arbe Robotics
Shmoolik Mangan, Algorithms Development Manager, VayaVision
Hall 2
Chair: Ran Gazit, General Motors
Gila Kamhi, Research Lab Group Manager, User Experience Technologies, General Motors
Guy Raz, CTO, Guardian Optical Technologies
Paulo Resende, P2/P3 R&D Product Technical Leader, Valeo France
Hall 3
Chair: Koby Cohen
Orr Danon, CEO and VP R&D, Hailo
Zohar Fox, Co-founder and CEO, Aurora Labs
Inbal Toren, Senior Product Manager, eyeSight Technologies
Chair: Prof. Gabby Sarusi, Ben-Gurion University
Alex Shulman, Director of Products, Foresight
Lior Cohen, CTO & Co-founder, Ride Vision
Avi Bakal, CEO & Co-founder, TriEye
Amit Benjamin, Product Manager, Director, Texas Instruments
Chair: Gadi Hornstein, Israel Innovation Authority
Amir Freund, Chief Product Officer, Otonomo
Danny Atsmon, CEO & Founder, Cognata
Daniel Rezvani, Security Engineer, Argus Cyber Security
Moshe Shlisel, CEO & Co-Founder, GuardKnox
Chair: David Abraham, Robert Bosch GmbH
Hilla Tavor, Senior Director, Advanced Development, Mobileye
Zvi Shiller, Chair, Department of Mechanical Engineering and Mechatronics, Ariel University and the director of the Paslin Laboratory for Robotics and Autonomous Vehicles
Michael Lipka, Manager Technology Planning, Huawei Technologies
Mathias Burger, Reinforcement Learning & Planning, Bosch Center for Artificial Intelligence
Chair: Rutie Adar, Samsung
Aharon Aharon, CEO, Israel Innovation Authority
Oren Betzaleli, General Manager, Harman Israel
Micha Risling, SVP Marketing and Business Development, Head of the Automotive Business Unit, Valens
Bruno Fernandez-Ruiz, Co-Founder & CTO, Nexar Inc.
Adham Ghazali, CEO & Co-Founder, Imagry
Ayman Mouallem, Sr. Functional Safety Automation Engineer
Xavier Perrotton, Software Department Manager at Driving Assistance Research, Valeo
Tal Ben David, VP R&D & Co-Founder, Karamba Security
Yoni Kahana, VP Customers, NanoLock
Rami Khawaly, Co-Founder and CTO, MindoLife
Moritz von Grotthuss, General Site Manager, Gestigon
Yoav Hollander, Founder and CTO, Foretellix
Chair: Micha Risling, Valens
Asaf Moses, Technical Product Manager, Systematics Ltd.
Yaniv Sulkes, VP Business Development and Marketing, North America & Europe, Autotalks
Ophir Herbst, CEO, Jungo Connectivity Ltd.
Anat Lea Bonshtien, Panel Moderator, Chairman & Director, Fuel Choices & Smart Mobility Initiative, Prime Minister's Office
Rutie Adar, Head of Samsung Strategy and Innovation Center
Michal Varkat Wolkin, Head of Israel Office, Investments and Innovation, Lear Corporation
Yahal Zilka, Managing Partner, Magma
Danielle Holtz, Director of Business Development, Maniv
Raj Rajkumar, George Westinghouse Professor of Electrical and Computer Engineering; Director, T-SET University Transportation Center; Director, Real-Time and Multimedia Systems Lab
President & CEO of Mobileye and Senior Vice President, Intel Corporation
Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His fields of expertise are computer vision and machine learning. Amnon has founded three startups in these fields. In 1995 he founded CogniTens, which specializes in industrial metrology and is today a division of the Swedish corporation Hexagon. In 1999 he co-founded Mobileye with his partner Ziv Aviram. Mobileye develops system-on-chips and computer vision algorithms for driving-assistance systems, and is developing a platform for autonomous driving to be launched in 2021. Today, approximately 32 million cars rely on Mobileye technology to make driving safer. In August 2014, Mobileye claimed the title of largest Israeli IPO ever, raising $1B at a market cap of $5.3B. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever, at $15.3B. Today, Prof. Shashua is the President & CEO of Mobileye and a Senior Vice President of Intel Corporation. In 2010 Amnon co-founded OrCam, which harnesses computer vision and artificial intelligence to assist people who are visually impaired or blind.
Senior Director of Automotive, NVIDIA
Danny Shapiro is Senior Director of Automotive at NVIDIA, focusing on artificial intelligence (AI) solutions for the development and deployment of safe self-driving cars, trucks and shuttles. The NVIDIA automotive team is engaged with over 370 car and truck makers, tier 1 suppliers, HD mapping companies, sensor companies and startup companies that are all using the company's DRIVE hardware and software platform for autonomous vehicle development and deployment.
Danny serves on the advisory boards of the Los Angeles Auto Show, the Connected Car Council and Udacity. He holds a Bachelor of Science in electrical engineering and computer science from Princeton University and an MBA from the Haas School of Business at UC Berkeley. Danny lives in Northern California where his home solar system charges his electric, AI self-driving car.
Vice President - Consulting, Mobility-Europe, Frost & Sullivan
Fully autonomous vehicles are coming in the not-so-distant future, and it won’t be long before self-driving cars are available to everyone. But what will it take for autonomous driving to go mainstream? What technology is necessary to make it happen? How can the industry make prices reasonable for the masses? What obstacles stand in our way, and how can we overcome them? In this session, Innoviz CEO Omer Keilaf will answer these questions and others as he presents a roadmap for achieving mass commercialization of autonomous vehicles.
CEO & Co-Founder, Innoviz Technologies
The autonomous driving industry requires a sensor that performs in real time in all lighting and weather conditions. In addition, in a world where autonomous cars may drive toward one another on highways at high speed, the sensor must be able to “see” them coming from over 300 meters away, track velocity, and detect distance. In this presentation, Kobi Marenko will explain why Imaging Radar is the only technology that can overcome these challenges, and discuss what role radar will have in an autonomously driven future.
Kobi will focus on how to increase public confidence in the autonomous market while pushing the industry to develop further. He will specifically discuss the challenges that must be addressed to achieve a next generation of radars, such as sensing the road with both ultra-high resolution and a wide field of view, resolving ambiguities, achieving low false-alarm rates, and coping with mutual radar interference, all while keeping prices low and reliability high.
Co-founder & CEO, Arbe Robotics
Algorithms Development Manager, VayaVision
User experience in the era of intelligent vehicles is shifting from the driver to the passenger. We can leverage the in-cabin and out-of-cabin sensing and perception abilities of intelligent vehicles to put the human in the loop, perceiving the user in each of their roles (driver, passenger, road user) and automatically adjusting vehicle behavior to enhance the user experience. In this new world of intelligent vehicles, we first need to address the needs of the passenger and driver, targeting a natural, personalized in-cabin car HMI that leverages the vehicle’s ability to sense and perceive the cabin and its passengers via inward-facing sensors. Then we need to address the needs of the other humans, the road users, ensuring that the new world of mobility, one in which humans are surrounded by a growing population of intelligent vehicles, is a safer world where traffic flows, advancing quality of life.
Research Lab Group Manager, User Experience Technologies, General Motors
As the main goal of vehicles is to transport passengers, the well-being, safety and comfort of the passengers inside a car are of major importance. This becomes especially true when we envision the era of autonomous vehicles, and specifically autonomous public transportation systems such as autonomous taxis, in which there is no driver in the car to assume this responsibility.
At Guardian Optical Technologies we develop an in-car multi-sensor that provides rich passenger data. Our sensor combines video images, depth data and micro-scale vibration sensing in one device. Based on these three layers of information we build different applications that can analyze and report on different aspects of what’s going on inside the vehicle cabin. The combination of these three different, complementary modalities gives us great robustness and reliability.
As an example, a “forgotten infant” application monitors the inside of a locked vehicle to detect whether a child (or pet) was left behind. We do this by utilizing the great sensitivity of the micro-scale vibration layer, which is capable of detecting breathing or even heartbeat motion without a direct line of sight. Combined with the depth and vision layers, this results in unmatched detection robustness.
As another example, an “occupancy” detector is used to replace the current seat-belt reminder sensor. We use our sensor not only to detect that there is a person on the seat (and not just a heavy bag), but also to classify their mass and age, and to indicate whether they are sitting correctly or are out of position, for smart airbag deployment in case of emergency.
The list of possible applications goes on, starting with basic passenger and driver monitoring and extending to complex behavior analysis, such as detecting restlessness or violence among passengers. We achieve this level of capability by employing state-of-the-art machine learning and neural-network algorithms, training the system on a large collection of data. In this talk we will demonstrate some of our applications and explain how we achieve them.
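The three-layer fusion described above can be sketched in a few lines. This is an illustrative toy only, assuming hypothetical per-modality confidence scores and weights; Guardian's actual algorithms are ML-based and not public:

```python
# Illustrative sketch: weighted-vote fusion of three hypothetical
# per-modality detectors (vibration, depth, video). All names, weights
# and thresholds here are invented for illustration.

def fuse_presence(scores, weights=None, threshold=0.5):
    """Fuse per-modality presence scores in [0, 1] into one decision.

    scores: dict mapping modality name -> confidence in [0, 1]
    weights: optional dict of per-modality weights (default: equal)
    Returns (is_present, fused_score).
    """
    if weights is None:
        weights = {m: 1.0 for m in scores}
    total = sum(weights[m] for m in scores)
    fused = sum(scores[m] * weights[m] for m in scores) / total
    return fused >= threshold, fused

# A sleeping infant: strong vibration (breathing) signal, weak video cue.
present, score = fuse_presence(
    {"vibration": 0.9, "depth": 0.6, "video": 0.2},
    weights={"vibration": 2.0, "depth": 1.0, "video": 1.0},
)
```

Even with a weak video cue, the vibration layer carries the decision, which mirrors the "forgotten infant" example above.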
CTO, Guardian Optical Technologies
P2/P3 R&D Product Technical Leader , Valeo France
CEO and VP R&D, Hailo
Co-founder and CEO, Aurora Labs
Senior Product Manager, eyeSight Technologies
Director of Products, Foresight
CTO & Co-founder, Ride Vision
CEO & Co-founder, TriEye
This presentation describes the current ADAS/AV sensing technology market, surveys the advantages and disadvantages of each sensor type in light of a few recent accidents, and then introduces the Imaging Radar sensor, a new sensor technology that we believe will be a must-have sensor in Level 3, 4 and 5 AVs. In the second part of the presentation, we will go deeper and explain how to cascade multiple TI single-chip radars into a high-performance radar sensor. The cascaded radar system can support both MIMO and TX beamforming modes for high angular resolution and long detection range. The high-accuracy phase shifter enables active beam steering toward the desired angle of interest. Some field-test results based on the TI 4-chip cascade evaluation board are presented to demonstrate the achieved performance.
Product Manager, Director, Texas Instruments
Chief Product Officer, Otonomo
CEO & Founder, Cognata
Hacking Automotive Ethernet Cameras
Daniel Rezvani, Argus Cyber Security
Autonomous vehicles rely on a range of sensors to interact with the world around them. One of the most noticeable sensors is the Ethernet camera - a standard camera which is used for vision-based ADAS (advanced driver assistance systems). Since this camera has become a critical part of an autonomous car’s safety (it is responsible for identifying nearby hazards, traffic signs, etc.), the consequences of a successful cyber-attack launched against such a camera can be devastating, resulting in real physical injury or death. In this technical presentation, you will learn how our team of automotive cyber security researchers was able to easily hack an Ethernet camera (similar to those being integrated into today's connected vehicles) and trick it into thinking a pre-recorded video is reality.
Security Engineer, Argus Cyber Security
CEO & Co-Founder, GuardKnox
Senior Director, Advanced Development , Mobileye
Chair, Department of Mechanical Engineering and Mechatronics, Ariel University and the director of the Paslin Laboratory for Robotics and Autonomous Vehicles
Individual transportation is being reshaped by three megatrends: electrification, automation, and digital transformation. All three are vital questions for traditional car manufacturers and therefore have tremendous impact on an automotive ecosystem that has existed for more than 100 years. Electrification simplifies the drivetrain, enabling new players to enter the automotive market on one side of the competition, while automation will place the taxi ride in an entirely new competitive position against private car ownership. As a consequence, car-sharing offerings in major economies will be boosted, and in turn sharing customers will be open to new modalities such as electrical vertical take-off and landing (eVTOL) vehicles, which are on the horizon as a convenient alternative for medium distances. Although both privately owned cars from traditional OEMs and car-sharing fleets will become electric and autonomous, the technological approaches might differ. Traditional OEMs are used to offering their customers flexible, individual, all-in-one vehicles, and are therefore developing a corresponding stand-alone autonomous driving experience. In contrast, car-sharing operators are focused on return on investment within a fleet approach. Consequently, fleet automation will follow different paradigms than car automation and will therefore drive different technological solutions. A supporting road and back-end infrastructure for fleet automation, including flying vehicles, decoupled from the energy-consumption constraints of BEV electronics, will potentially drive solutions other than those currently developed within the car industry. Fleet operation will especially benefit from the digital transformation, driving administrative expenditures down to a very low level.
A draft scenario of these developments will be presented as a basis for discussion within the community, to develop a joint understanding of future technology demand. Will these developments drive competition, or will the community see a cooperative environment? We intend to initiate a discussion on technology development for the future automobile ecosystem.
Manager Technology Planning, Huawei Technologies
Reinforcement Learning & Planning, Bosch Center for Artificial Intelligence
CEO, Israel Innovation Authority
General Manager, Harman Israel
SVP Marketing and Business Development, Head of the Automotive Business Unit, Valens
The robustness of end-to-end driving policy models depends on having access to the largest possible training dataset, exposing the true diversity of the 10 trillion miles that humans drive every year in the real world. However, current approaches are limited to models trained using homogeneous data from a small number of vehicles running in controlled environments or in simulation, which fail to perform adequately in real-world dangerous corner cases. Safe driving requires continuously resolving a long tail of those corner cases. The only possible way to train a robust driving policy model is therefore to continuously capture as many of these cases as possible. The capture of driving data is unfortunately constrained by the reduced compute capabilities of the devices running at the edge and the limited network connectivity to the cloud, making the task of building robust end-to-end driving policies very complex.
Bruno Fernandez-Ruiz offers an overview of a network of connected devices deployed at the edge running deep learning models that continuously capture, select, and transfer to the cloud “interesting” monocular camera observations, vehicle motion, and driver actions. The collected data is used to train an end-to-end vehicle driving policy, which also guarantees that the information gain of the learned model is monotonically increasing, effectively becoming progressively more selective of the data captured by the edge devices as it walks down the tail of corner cases.
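The edge-side selection of “interesting” observations might be approximated, very roughly, by a novelty filter: keep an observation only if it is far from everything captured so far. This is a toy sketch with invented names and thresholds, not Nexar's actual system:

```python
# Toy sketch of edge-side data selection: upload an observation only
# when it is sufficiently "novel" relative to what has already been
# kept, approximated by distance from previously kept feature vectors.
import math

class NoveltySelector:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.centroids = []          # feature vectors of kept observations

    def _dist(self, a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def should_upload(self, features):
        """Keep an observation only if far from everything seen so far."""
        if not self.centroids or all(
            self._dist(features, c) > self.threshold for c in self.centroids
        ):
            self.centroids.append(features)
            return True
        return False

sel = NoveltySelector(threshold=1.0)
sel.should_upload([0.0, 0.0])   # first observation: always kept
sel.should_upload([0.1, 0.1])   # near-duplicate: skipped
sel.should_upload([5.0, 5.0])   # novel corner case: kept
```

A real system would replace the raw feature distance with a learned measure of information gain, so the filter becomes progressively more selective as common cases are exhausted.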
Co-Founder & CTO, Nexar Inc.
Bruno Fernandez-Ruiz is cofounder and CTO at Nexar, where he and his team are using large-scale machine learning and machine vision to capture and analyze millions of sensor and camera readings in order to make our roads safer. Previously, Bruno was a senior fellow at Yahoo, where he oversaw the development and delivery of Yahoo’s personalization, ad targeting, and native advertising teams; his prior roles at Yahoo included chief architect for Yahoo’s cloud and platform and chief architect for international. Prior to joining Yahoo, Bruno founded OneSoup (acquired by Synchronica and now part of the Myriad Group) and YamiGo; was an enterprise architect for Fidelity Investments; served as manager in Accenture’s Center for Strategic Research Group, where he cofounded Meridea Financial Services and Accenture’s claim software solutions group. Bruno holds an MSc in operations research and transportation science from MIT, with a focus on intelligent transportation systems.
Autonomous driving has come a long way, although its success is limited to areas where a high-definition map has been pre-built and is known. Inspired by recent achievements in artificial intelligence, particularly methods that combine tree search and DNNs (e.g. AlphaGo Zero), this talk demonstrates how to effectively combine deep learning and conventional path planning to drive in unknown areas.
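One schematic way to combine a learned value function with classical search (a rough illustration of the tree-plus-DNN idea, not Imagry's actual method) is to let a stubbed network heuristic guide best-first expansion over a grid:

```python
# Schematic sketch: best-first tree search guided by a "learned" value.
# The network is a stub heuristic here; in practice a DNN would score
# states. Grid, obstacles and goal are invented for illustration.
import heapq

def learned_value(state, goal):
    """Stand-in for a DNN value head: negative Manhattan distance."""
    return -(abs(state[0] - goal[0]) + abs(state[1] - goal[1]))

def plan(start, goal, blocked=frozenset(), limit=10_000):
    """Expand the most promising states first, per the value function."""
    frontier = [(-learned_value(start, goal), start, [start])]
    seen = {start}
    while frontier and limit:
        limit -= 1
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        x, y = state
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                heapq.heappush(
                    frontier, (-learned_value(nxt, goal), nxt, path + [nxt])
                )
    return None

route = plan((0, 0), (2, 2), blocked={(1, 1)})
```

Swapping the stub for a trained value network keeps the search machinery unchanged, which is the appeal of hybrid learning-plus-planning approaches.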
CEO & Co-Founder, Imagry
Sr. Functional Safety Automation Engineer
Software Department Manager at Driving Assistance Research, Valeo
VP R&D & Co-Founder, Karamba Security
VP Customers, NanoLock
Co-Founder and CTO, MindoLife
Many expect the car to become the third living space, alongside our home and our office. Longer commutes, more traffic and more time in the car are creating this reality already today. But how can we avoid this development turning into our personal mobile dystopia? How do we create a place to be, and a place where you like to be? HMI, AR, an interior cocoon and new perceptual safety features are core to creating a Mobile Livingroom 2.0. These concepts will be introduced, explained and discussed.
General Site Manager, Gestigon
The tragic Uber accident has brought autonomous vehicle (AV) safety into sharp focus. It raised awareness of aspects such as how people are more afraid of things they can’t control, the need for third-party testing, the insurance implications of all this, and so on. Regardless of the specifics of this incident, this presentation will look at the bigger picture of AV deployment, safety and verification, and expand on the following claims:
- Beyond a certain safety threshold, AVs should be deployed
- While safer than human drivers, AVs will continue to have many fatal accidents
- AV manufacturers and regulators should employ a well-thought-out, comprehensive, continuously-improving, multi-execution-platform, transparent verification system
There seems to be a need for the various stakeholders (the public, lawmakers, regulators, AV manufacturers etc.) to agree on some general framework for handling these accidents (and the whole deployment process). That framework should ensure, among other things, that:
- Not every accident results in a lengthy, billion-dollar lawsuit
- Negligent AV manufacturers do get punished
- Everybody (the public, the press, judges, lawmakers, regulators etc.) has an understandable way to scrutinize the safety of various AVs, both in general and as it relates to a specific accident scenario
This is going to be a non-trivial framework: It will surely have legal and regulatory components. It will probably include ISO-style “process” standards, such as ISO 26262, the SOTIF follow-on, and the expected “SOTIF-for-AVs” follow-on to that. It may contain a formal component and more. But the central component (tying all others together) is probably going to be a verification system. The presentation will describe how an AV verification system lets you:
- Define a comprehensive, continuously-updated library of parameterized scenarios
- Run variations of each scenario many times against the AV in question, using a proper mix of execution platforms (such as simulation, test tracks etc.)
- Evaluate the aggregate of all these scenarios and runs (and any requested subset of it), to transparently understand what was verified (this is called “coverage”) and what “grade” it got
Such a coverage-driven verification system (enhanced by ML-based techniques) is probably our best bet. It will also be a crucial component for improving safety as quickly as possible. The presentation describes the main attributes of such a system. A Scenario Description Language will be described, along with a tool suite that utilizes it to create a path to safer autonomous vehicles. Specific focus is given to regulatory aspects of such a language, and its potential usage as a certification tool.
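The scenario-library, run, and coverage loop described above can be sketched minimally. This is a hypothetical illustration with an invented scenario, parameter space and pass criterion, not Foretellix's actual tooling:

```python
# Minimal sketch of coverage-driven scenario testing: a parameterized
# "cut-in" scenario is swept, each variation runs against a stub AV
# model, and coverage plus an aggregate grade are computed.
import itertools

# Parameterized scenario space: ego speeds (km/h) and cut-in gaps (m).
SPEEDS = [30, 60, 90]
GAPS = [5, 15, 30]

def run_cut_in(speed, gap):
    """Stub execution platform: returns True if the AV avoided collision.
    A real system would dispatch to simulation or a test track."""
    return gap / max(speed, 1) > 0.1   # toy pass criterion

def sweep():
    coverage = {}                      # (speed, gap) bucket -> pass/fail
    for speed, gap in itertools.product(SPEEDS, GAPS):
        coverage[(speed, gap)] = run_cut_in(speed, gap)
    covered = len(coverage) / (len(SPEEDS) * len(GAPS))
    grade = sum(coverage.values()) / len(coverage)
    return covered, grade, coverage

covered, grade, cov = sweep()
```

The coverage map shows which parameter buckets were exercised, and the failing buckets (here, high speed with a small gap) point directly at the corner cases needing attention.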
Founder and CTO, Foretellix
https://www.systematics.co.il/products/mathworks/main/
An autonomous system is a system that functions independently or in a supervised manner and operates under conditions of uncertainty, in an unknown and unpredictable dynamic environment.
Autonomous systems may accumulate new knowledge or adapt themselves to a changing environment. In order to complete their mission, these systems can acquire information from their surroundings, move independently in their environment (real or virtual), avoid dangerous situations and act for a long period of time without human intervention.
The MathWorks Model-Based Design (MBD) approach, which is based on MATLAB & Simulink, includes many capabilities that allow us to design, simulate and test autonomous systems in a simple, comfortable workflow.
This lecture will review a number of capabilities from the autonomous domain that allow us to transform a designed system into an autonomous system that can make decisions independently. During this session, we will present several possibilities for designing an automated system: defining and combining different sensors (sensor fusion), creating perception capabilities so the system better understands its environment, planning and following optimal trajectories, and finally, interfacing our autonomous system with other environments.
This MBD workflow allows us to save a significant amount of time during the development process, through the prototype stage and beyond.
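As a toy illustration of the sensor-fusion step mentioned above (the talk itself uses MATLAB & Simulink; this sketch is plain Python with invented numbers), a one-dimensional Kalman update can fuse two noisy range readings:

```python
# Toy 1-D Kalman measurement update: fuse a prior range estimate from
# one sensor with a reading from another. Sensor values and variances
# below are invented for illustration.

def kalman_fuse(est, var, meas, meas_var):
    """One Kalman measurement update: fold a new reading into an estimate."""
    k = var / (var + meas_var)              # Kalman gain
    new_est = est + k * (meas - est)
    new_var = (1 - k) * var
    return new_est, new_var

# Start from a radar estimate, then fold in a lidar reading.
est, var = 10.0, 4.0                        # radar: 10 m, variance 4
est, var = kalman_fuse(est, var, 10.8, 1.0) # lidar: 10.8 m, variance 1
```

The fused variance ends up lower than either sensor's alone, which is the core payoff of sensor fusion that the toolbox workflows build on at full scale.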
Technical Product Manager, Systematics Ltd.
Asaf Moses serves as a technical product manager at Systematics Ltd. and is responsible for the MathWorks Autonomous Systems product family. In this position, Asaf draws on his extensive experience with different companies to accompany and assist their development processes. Disciplines of expertise: aeronautics, control, physical modeling and robotics. Asaf Moses holds a B.Sc. in Mechanical Engineering from Ben-Gurion University and an MBA from the Hebrew University of Jerusalem.
This presentation will discuss what will happen on our roads until we reach the stage (if at all) when all vehicles are autonomous. It will present how and why autonomous and manned vehicles must find a way to share our roadways. Questions will be raised regarding whether autonomous vehicles can predict human behaviour and, vice versa, whether human drivers can anticipate the intentions of autonomous vehicles. Other issues will be raised regarding how to protect vulnerable road users and whether “extra” protection will be needed for motorcycles, bicycles and pedestrians. V2X will be presented as a solution, and additional benefits of this technology will be highlighted, including improved safety and mobility and reduced emissions. A conclusion will discuss the main reasons why V2X is an essential technology on the way to full vehicle autonomy.
VP Business Development and Marketing, North America & Europe, Autotalks
CEO, Jungo Connectivity Ltd.
- Serial entrepreneur, with over 20 years of operational and technology-driven experience
- CEO and founder of Jungo Connectivity, spinoff from Cisco, developing ground-breaking computer-vision automotive driver monitoring product
- GM at Jungo (acquired by NDS for $107M); Executive at NDS (acquired by Cisco for $5B)
- Founder and CEO of Mathtools, specializing in MATLAB compilers and related technologies (acquired by MathWorks)
- B.Sc. in Electrical Engineering (summa cum laude) from the Technion, Israel Institute of Technology
- World-class Bridge player, holding international wins, often representing Israel in international competitions
Panel Moderator, Chairman & Director, Fuel Choices & Smart Mobility Initiative, Prime Minister's Office
Head of Samsung Strategy and Innovation Center
Head of Israel Office, Investments and Innovation, Lear Corporation
Managing Partner, Magma
Director of Business Development, Maniv
Danielle manages Maniv’s Business Development efforts, overseeing Maniv’s portfolio support and managing the fund’s relationship with its Limited Partners.
Before joining Maniv, Danielle was the Manager of Partnerships at OurCrowd, one of the world’s leading equity crowdfunding platforms. As one of OurCrowd's early employees, Danielle played a critical role in shaping and implementing the company's business development efforts, seeing the company through three financing rounds that raised over $200M. Danielle holds a B.A. in Business Administration & Italian Literature from The Hebrew University of Jerusalem. She also attended the Università di Perugia in Italy, where she received a Certificate of Proficiency in Italian.
George Westinghouse Professor of Electrical and Computer Engineering; Director, T-SET University Transportation Center; Director, Real-Time and Multimedia Systems Lab