Abstract
Biological visual systems exhibit exceptional capabilities in spatiotemporal information processing, offering a compelling paradigm for advancing artificial intelligence perception. Current machine vision systems face fundamental limitations in data-handling efficiency. Emerging retina-inspired neuromorphic hardware, namely retinomorphic devices, may address these limitations by synergistically integrating visual sensing, non-volatile memory, and in situ data processing, thereby minimizing data storage and transmission latency. Here, a programmable retinomorphic device based on graphene-contacted MoS2 field-effect transistors is demonstrated, in which a CMOS-compatible oxygen-activation strategy realizes a functional charge-trapping medium layer. This charge trapping/de-trapping mechanism enables excellent non-volatile memory performance, including robust cycling endurance (>100 cycles) and good air stability (>400 days). Additionally, the gate voltage enables programmable modulation of photosensitive synaptic plasticity, allowing the artificial synapse to be reversibly switched between silent and active states. Notably, the time-dependent photocurrent encodes the motion features of moving honeybees, integrating spatiotemporal information at the device level to enhance the efficiency of subsequent processing. The compressed features are decoded by an adaptive convolutional neural network to execute three-level perception-reasoning tasks. This work introduces a bio-inspired information encoding paradigm analogous to honeybee communication, establishing a hardware foundation for neuromorphic visual systems that require highly efficient spatial information processing and transmission.
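As an illustration of the decoding stage summarized above, the following is a minimal, hypothetical sketch of how time-dependent photocurrent traces, which already carry compressed spatiotemporal features at the device level, might be classified by a small convolutional network for a three-level perception task. This is not the authors' implementation; all tensor shapes, layer widths, and the choice of a 1D CNN in PyTorch are illustrative assumptions.

```python
# Hypothetical sketch: decoding device-level spatiotemporal photocurrent features
# with a small 1D CNN. Shapes, layer widths, and the three output classes are
# illustrative assumptions, not the network described in the paper.
import torch
import torch.nn as nn

class PhotocurrentDecoder(nn.Module):
    def __init__(self, n_pixels: int = 16, n_classes: int = 3):
        super().__init__()
        # Each retinomorphic pixel contributes one time-dependent photocurrent
        # trace; the channel dimension indexes pixels, the last dimension time.
        self.features = nn.Sequential(
            nn.Conv1d(n_pixels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)  # e.g. three perception levels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_pixels, n_time_steps) photocurrent traces
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

if __name__ == "__main__":
    model = PhotocurrentDecoder()
    traces = torch.randn(8, 16, 200)   # 8 samples, 16 pixels, 200 time steps
    logits = model(traces)
    print(logits.shape)                # torch.Size([8, 3])
```

Because the temporal integration is performed by the device itself, the network in such a scheme only needs to operate on compact photocurrent traces rather than full image sequences, which is the source of the processing-efficiency gain the abstract refers to.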