![](https://www.aitechinsights.com/wp-content/uploads/2024/07/1720039257_Toybrick-TB-RK1808S0-AI-Calculation-Stick-RK1808-NPU-Processor-for-deep.jpg)
![](https://www.aitechinsights.com/wp-content/uploads/2024/07/Toybrick-TB-RK1808S0-AI-Calculation-Stick-RK1808-NPU-Processor-for-deep.jpg)
![](https://www.aitechinsights.com/wp-content/uploads/2024/07/1720039255_29_Toybrick-TB-RK1808S0-AI-Calculation-Stick-RK1808-NPU-Processor-for-deep.jpg)
![](https://www.aitechinsights.com/wp-content/uploads/2024/07/1720039255_864_Toybrick-TB-RK1808S0-AI-Calculation-Stick-RK1808-NPU-Processor-for-deep.jpg)
![](https://www.aitechinsights.com/wp-content/uploads/2024/07/1720039256_762_Toybrick-TB-RK1808S0-AI-Calculation-Stick-RK1808-NPU-Processor-for-deep.jpg)
![](https://www.aitechinsights.com/wp-content/uploads/2024/07/1720039256_198_Toybrick-TB-RK1808S0-AI-Calculation-Stick-RK1808-NPU-Processor-for-deep.jpg)
![](https://www.aitechinsights.com/wp-content/uploads/2024/07/1720039256_420_Toybrick-TB-RK1808S0-AI-Calculation-Stick-RK1808-NPU-Processor-for-deep.jpg)
![](https://www.aitechinsights.com/wp-content/uploads/2024/07/1720039256_109_Toybrick-TB-RK1808S0-AI-Calculation-Stick-RK1808-NPU-Processor-for-deep.jpg)
![](https://www.aitechinsights.com/wp-content/uploads/2024/07/1720039256_665_Toybrick-TB-RK1808S0-AI-Calculation-Stick-RK1808-NPU-Processor-for-deep.jpg)
![](https://www.aitechinsights.com/wp-content/uploads/2024/07/Toybrick-TB-RK1808S0-AI-Calculation-Stick-RK1808-NPU-Processor-for-deep.png)
(as of Jul 03, 2024 20:40:57 UTC)
STRONG COMPATIBILITY: Supports converting network models from a range of frameworks, such as Caffe and TensorFlow.
LOWER POWER CONSUMPTION: The chip's CPU uses a dual-core Cortex-A35 architecture built on a 22nm FD-SOI process. At equivalent performance, power consumption is roughly 30% lower than with the mainstream 28nm process.
DEVELOPMENT FRIENDLY: Supports Linux, and the AI application development SDK supports C/C++ and Python, making it convenient for developers to convert networks from floating point to fixed point and to debug them.
SCALABILITY: Supports stacking multiple sticks on the same host platform to extend performance.
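The floating-point-to-fixed-point conversion mentioned above is the quantization step an NPU toolchain performs before deployment. The sketch below is an illustrative, self-contained example of the underlying idea (affine int8 quantization with a shared scale and zero-point) and is not the vendor SDK's actual API:

```python
# Minimal sketch of affine int8 quantization, the kind of float-to-fixed-point
# conversion an NPU SDK applies to network weights before running on the chip.

def quantize_int8(values):
    """Map a list of floats to int8 using a shared scale and zero-point."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0   # avoid zero scale for constant input
    zero_point = round(-lo / scale) - 128  # chosen so that lo maps near -128
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the quantized representation."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, s, z = quantize_int8(weights)
approx = dequantize(q, s, z)
```

In a real toolchain the scale and zero-point are typically calibrated per layer (or per channel) from sample input data, which is why SDK quantization flows usually ask for a calibration dataset.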