AI experts often overlook the constraints of embedded systems, which makes it challenging to fit a conventional model into a tiny device. Typical constraints include limited computational resources, memory, and power.
To address these, we reduce the complexity of the deep net inference engine by minimizing intra-network connectivity, eliminating floating-point data, and using only accumulation operations.
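To give a concrete sense of what an accumulate-only, integer-only layer can look like, here is a minimal C sketch. It is an illustration under assumptions not stated in the text above: weights are ternary (-1, 0, +1), activations are 8-bit integers, and sparse connectivity is expressed by zero weights that are simply skipped, so every connection reduces to an add, a subtract, or nothing.

```c
/* Minimal sketch of an accumulate-only, integer-only dense layer.
 * Assumptions (not from the original text): ternary weights in
 * {-1, 0, +1}, 8-bit activations, 32-bit accumulators. */
#include <stdint.h>
#include <stddef.h>

/* One fully connected layer: out[j] = relu(sum_i w[j][i] * in[i]).
 * Because every weight is -1, 0, or +1, each multiply-accumulate
 * collapses to an add, a subtract, or a skip: no multiplier, no floats. */
void dense_ternary(const int8_t *in, size_t n_in,
                   const int8_t *w,          /* n_out x n_in ternary weights */
                   int32_t *out, size_t n_out)
{
    for (size_t j = 0; j < n_out; ++j) {
        int32_t acc = 0;                      /* integer accumulator */
        const int8_t *wj = &w[j * n_in];
        for (size_t i = 0; i < n_in; ++i) {
            if (wj[i] > 0)      acc += in[i]; /* +1 weight: add      */
            else if (wj[i] < 0) acc -= in[i]; /* -1 weight: subtract */
            /* 0 weight: pruned connection, nothing to do            */
        }
        out[j] = acc > 0 ? acc : 0;           /* ReLU, still integer */
    }
}
```

A layer like this needs only additions, subtractions, and comparisons, which maps well onto small microcontrollers without a floating-point unit.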
These small-footprint, low-latency deep nets suit IoT smart sensors that measure inertial, vibration, temperature, flow, electrical, and biochemical signals in battery-powered endpoints, with applications in healthcare and industrial wearables, robotics, and automotive systems.