
We introduce a dual-hormone control algorithm for people with Type 1 Diabetes (T1D) based on deep reinforcement learning (RL). Specifically, double dilated recurrent neural networks are used to learn the control strategy, trained by a variant of Q-learning. The model takes the real-time sensed glucose and meal carbohydrate content as inputs, and outputs the actions needed to deliver dual-hormone (basal insulin and glucagon) control. Without prior knowledge of glucose-insulin metabolism, we develop a data-driven model using the UVA/Padova Simulator. We first pre-train a generalized model through long-term exploration in an environment with the average T1D subject parameters provided by the simulator, then adopt importance sampling to train a personalized model for each individual. In silico, the proposed algorithm substantially reduces adverse glycemic events and improves time in range, i.e., the percentage of time spent in normoglycemia, for both adults and adolescents, significantly outperforming previous approaches. These results indicate that deep RL has great potential to improve the treatment of chronic diseases such as diabetes.
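The abstract's training loop pairs a Q-learning update with dual-hormone actions. The paper's double dilated RNNs and the UVA/Padova environment are not reproduced here; the sketch below illustrates only the underlying tabular Q-learning update on a toy, discretized glucose state. The bands, reward shaping, and one-step transition model are all illustrative assumptions, not the paper's method.

```python
import numpy as np

# Toy sketch of the Q-learning update behind a dual-hormone controller.
# The paper trains double dilated RNNs against the UVA/Padova simulator;
# here the state is a coarse glucose band, the actions are simplified
# dosing moves, and the reward shaping is an illustrative assumption.

GLUCOSE_BANDS = ["hypo", "low", "in_range", "high", "hyper"]
ACTIONS = [("insulin_up", -1), ("hold", 0), ("glucagon", +1)]  # effect on band

REWARD = {"hypo": -2.0, "low": -0.5, "in_range": 1.0, "high": -0.5, "hyper": -2.0}

def step(band, action_idx):
    """Stand-in one-step dynamics: insulin lowers glucose, glucagon raises it."""
    _, delta = ACTIONS[action_idx]
    nxt = int(np.clip(band + delta, 0, len(GLUCOSE_BANDS) - 1))
    return nxt, REWARD[GLUCOSE_BANDS[nxt]]

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning target: r + gamma * max_a' Q(s', a')."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

Q = np.zeros((len(GLUCOSE_BANDS), len(ACTIONS)))
rng = np.random.default_rng(0)
for _ in range(500):                       # episodes
    s = int(rng.integers(len(GLUCOSE_BANDS)))
    for _ in range(20):                    # steps per episode
        # epsilon-greedy action selection
        a = int(rng.integers(len(ACTIONS))) if rng.random() < 0.1 else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        q_update(Q, s, a, r, s_next)
        s = s_next

greedy = {GLUCOSE_BANDS[s]: ACTIONS[int(np.argmax(Q[s]))][0]
          for s in range(len(GLUCOSE_BANDS))}
```

Under this toy dynamics the learned greedy policy steers every band back toward the in-range state (more insulin when high, glucagon when low, hold when in range). The actual controller replaces the table with a recurrent Q-network, the toy dynamics with the simulator, and fine-tunes per subject via importance sampling.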

Original publication

DOI: 10.1007/978-3-030-53352-6_5
Type: Conference paper
Publication Date: 01/01/2021
Volume: 914
Pages: 45–53