Fetal well-being assessment using conventional tools requires skilled clinicians for interpretation and may be heavily degraded by noise when recordings are lengthy or contaminated by maternal activity. In this paper, we propose for the first time a deep learning-based model, fetal heart sounds U-Net (FHSU-NET), for the automated extraction of fetal heart activity, represented as sound waves, from raw phonocardiography (PCG). A total of 20 healthy mothers were included in this study to train and validate FHSU-NET following a leave-one-subject-out (LOSO) cross-validation scheme. The model successfully extracted fetal PCG with a median root mean square error (RMSE) of 0.702 [IQR: 0.695-0.706] relative to the ground truth. The median error in heart rate estimated from the ground truth and from FHSU-NET was 18.507 [IQR: 11.996-23.215], with a correlation of 0.642 (p-value = 0.002) and a Bland-Altman mean difference of 5.18. The proposed model paves the way towards implementing deep learning in clinical settings to reduce the heavy dependence on medical experts when interpreting lengthy PCG recordings.
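As an illustration of how the reported evaluation metrics could be computed, the sketch below shows per-recording RMSE, the Pearson correlation of heart-rate estimates, and the Bland-Altman mean difference using NumPy and SciPy. This is not the authors' code; the variable names and placeholder data are assumptions, and in the paper's LOSO scheme these metrics would be computed on each held-out subject and summarised as median [IQR].

```python
# Illustrative sketch (not the authors' implementation) of the evaluation
# metrics reported in the abstract: RMSE between extracted and reference
# fetal PCG, Pearson correlation of heart rates, and the Bland-Altman
# mean difference (bias). All data below are placeholders.
import numpy as np
from scipy.stats import pearsonr


def rmse(extracted: np.ndarray, reference: np.ndarray) -> float:
    """Root mean square error between an extracted fPCG signal and its ground truth."""
    return float(np.sqrt(np.mean((extracted - reference) ** 2)))


def bland_altman_mean_difference(hr_reference: np.ndarray, hr_estimated: np.ndarray) -> float:
    """Mean of the pairwise differences, i.e. the bias term of a Bland-Altman analysis."""
    return float(np.mean(hr_estimated - hr_reference))


# Placeholder signals standing in for one held-out subject of a LOSO fold.
rng = np.random.default_rng(0)
reference_fpcg = rng.standard_normal(4000)                          # ground-truth fetal PCG segment
extracted_fpcg = reference_fpcg + 0.1 * rng.standard_normal(4000)   # model output with residual noise
print("RMSE:", rmse(extracted_fpcg, reference_fpcg))

# Placeholder heart-rate estimates (bpm) from the reference and the model.
hr_ref = np.array([138.0, 142.0, 135.0, 150.0])
hr_est = np.array([140.0, 141.0, 137.0, 148.0])
r, p = pearsonr(hr_ref, hr_est)
print("Pearson r:", r, "p-value:", p)
print("Bland-Altman mean difference:", bland_altman_mean_difference(hr_ref, hr_est))
```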

Original publication

DOI

10.1109/MLSP55844.2023.10285907

Type

Publication Date

01/01/2023

Volume

2023-September