Tiny machine learning (TinyML) aims to bring machine learning to the extreme edge, i.e., the microcontroller next to the sensor, unlocking new smart applications and paving the way for the last mile of artificial intelligence. Meanwhile, gesture recognition, a key technology for human-computer interaction (HCI), enables convenient interaction between people and smart devices. The most mature static hand gesture recognition solutions rely on data gloves or vision cameras to capture gestures, but these approaches face practical limitations: data gloves are cumbersome to put on, camera performance degrades under poor lighting, and processing usually depends on cloud or edge servers. To address these limitations, this paper designs and implements a static hand gesture recognition system based on an ultra-low-resolution infrared array sensor and a low-cost AI chip, from the perspective of tiny machine learning applications. We introduce a fast method for collecting and labeling the sensor data, and we customize an ultra-lightweight neural network model for the low-cost AI chip. Experimental results show that the system achieves 99.14% recognition accuracy on several simple static hand gestures, with an inference time of around 35 ms on the microcontroller, enabling accurate, real-time recognition with low cost, strong anti-interference capability, and good privacy.
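To illustrate the scale of model the abstract refers to, the following is a minimal sketch (not the authors' published architecture) of an ultra-lightweight CNN for low-resolution infrared frames, written with TensorFlow/Keras so it can be quantized and converted for TensorFlow Lite deployment on a microcontroller. The 8x8 input resolution, the five gesture classes, and all layer sizes are illustrative assumptions, not details from the paper.

```python
# A hypothetical sketch of an ultra-lightweight gesture classifier for a
# low-resolution infrared array sensor; input shape, class count, and
# layer widths are assumptions, not the paper's reported architecture.
import tensorflow as tf

NUM_CLASSES = 5          # assumed number of static gestures
INPUT_SHAPE = (8, 8, 1)  # assumed infrared array resolution (one thermal channel)

def build_tiny_gesture_net() -> tf.keras.Model:
    """A few-thousand-parameter CNN, small enough for TinyML targets."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=INPUT_SHAPE),
        tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_tiny_gesture_net()
model.summary()

# Post-training quantization: the usual step before deploying a Keras
# model to a low-cost AI chip via TensorFlow Lite (Micro).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```

A model of this size keeps the flash and RAM footprint within typical microcontroller budgets, which is consistent with the millisecond-scale on-device inference time the abstract reports.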