Assessing fetal well-being with conventional tools requires skilled clinicians for interpretation and is susceptible to noise interference, especially during lengthy recordings or when maternal effects contaminate the signals. In this study, we present a novel transformer-based deep learning model, the fetal heart sounds U-Net Transformer (FHSU-NETR), for automated extraction of fetal heart activity from raw phonocardiography (PCG) signals. The model was trained on a realistic synthetic dataset and validated on data recorded from 20 healthy mothers at the pregnancy outpatient clinic of Tohoku University Hospital, Japan. The model successfully extracted fetal PCG signals, achieving a heart rate mean difference of -1.5 bpm compared with the ground truth calculated from the fetal electrocardiogram (ECG). By leveraging deep learning, FHSU-NETR would facilitate timely interpretation of lengthy PCG recordings while reducing the heavy reliance on medical experts, thereby enhancing efficiency in clinical practice.
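The heart-rate comparison reported above can be illustrated with a minimal sketch: derive a mean heart rate from beat times in each signal and take the difference. This is not the paper's evaluation code; the beat annotations and names (`pcg_peaks`, `ecg_peaks`) are hypothetical placeholders.

```python
def heart_rate_bpm(peak_times):
    """Mean heart rate (bpm) from a list of beat times in seconds."""
    intervals = [b - a for a, b in zip(peak_times, peak_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Hypothetical beat times (seconds) for the same recording segment.
pcg_peaks = [0.00, 0.42, 0.84, 1.27, 1.69]   # beats detected in the extracted fetal PCG
ecg_peaks = [0.00, 0.43, 0.86, 1.29, 1.72]   # R-peaks from the reference fetal ECG

# Signed difference between PCG-derived and ECG-derived heart rate.
diff = heart_rate_bpm(pcg_peaks) - heart_rate_bpm(ecg_peaks)
print(f"HR difference: {diff:+.1f} bpm")
```

In practice such a comparison would be made per recording and summarized as a mean difference across subjects, as the study reports (-1.5 bpm over 20 recordings).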

Original publication

DOI

10.22489/CinC.2023.026

Publication Date

01/01/2023