PMID- 34288917
OWN - NLM
STAT- MEDLINE
DCOM- 20211109
LR  - 20211109
IS  - 1932-6203 (Electronic)
IS  - 1932-6203 (Linking)
VI  - 16
IP  - 7
DP  - 2021
TI  - Simple benchmarking method for determining the accuracy of depth cameras
      in body landmark location estimation: Static upright posture as a
      measurement example.
PG  - e0254814
LID - 10.1371/journal.pone.0254814 [doi]
LID - e0254814
AB  - To evaluate postures in ergonomics applications, studies have proposed
      the use of low-cost, marker-less, and portable depth camera-based motion
      tracking systems (DCMTSs) as a potential alternative to conventional
      marker-based motion tracking systems (MMTSs). However, a simple but
      systematic method for examining the estimation errors of various DCMTSs
      is lacking. This paper proposes a benchmarking method for assessing the
      estimation accuracy of depth cameras for full-body landmark location
      estimation. A novel alignment board was fabricated to align the
      coordinate systems of the DCMTSs and MMTSs. The data from an MMTS were
      used as a reference to quantify the error of using a DCMTS to identify
      target locations in 3-D space. To demonstrate the proposed method, the
      full-body landmark location tracking errors were evaluated for a static
      upright posture using two different DCMTSs. For each landmark, we
      compared each DCMTS (Kinect system and RealSense system) with an MMTS by
      calculating the Euclidean distances between symmetrical landmarks. The
      evaluation trials were performed twice. The agreement between the
      tracking errors of the two evaluation trials was assessed using the
      intraclass correlation coefficient (ICC). The results indicate that the
      proposed method can effectively assess the tracking performance of
      DCMTSs. The average errors (standard deviation) for the Kinect system
      and RealSense system were 2.80 (1.03) cm and 5.14 (1.49) cm,
      respectively. The highest average error values were observed in the
      depth orientation for both DCMTSs. The proposed method achieved high
      reliability with ICCs of 0.97 and 0.92 for the Kinect system and
      RealSense system, respectively.
FAU - Liu, Pin-Ling
AU  - Liu PL
AUID- ORCID: 0000-0001-8147-3309
AD  - Department of Industrial Engineering and Engineering Management,
      National Tsing Hua University, Hsinchu, Taiwan.
FAU - Chang, Chien-Chi
AU  - Chang CC
AUID- ORCID: 0000-0002-6476-4232
AD  - Department of Industrial Engineering and Engineering Management,
      National Tsing Hua University, Hsinchu, Taiwan.
FAU - Lin, Jia-Hua
AU  - Lin JH
AUID- ORCID: 0000-0001-8908-9328
AD  - Washington State Department of Labor and Industries, Olympia,
      Washington, United States of America.
FAU - Kobayashi, Yoshiyuki
AU  - Kobayashi Y
AD  - Human Augmentation Research Center, National Institute of Advanced
      Industrial Science and Technology, Tokyo, Japan.
LA  - eng
PT  - Journal Article
PT  - Research Support, Non-U.S. Gov't
DEP - 20210721
PL  - United States
TA  - PLoS One
JT  - PloS one
JID - 101285081
SB  - IM
MH  - *Gait
MH  - Humans
MH  - *Imaging, Three-Dimensional
MH  - *Motion
MH  - *Posture
MH  - *Software
PMC - PMC8294549
COIS- The authors have declared that no competing interests exist.
EDAT- 2021/07/22 06:00
MHDA- 2021/11/10 06:00
PMCR- 2021/07/21
CRDT- 2021/07/21 17:24
PHST- 2021/02/19 00:00 [received]
PHST- 2021/07/04 00:00 [accepted]
PHST- 2021/07/21 17:24 [entrez]
PHST- 2021/07/22 06:00 [pubmed]
PHST- 2021/11/10 06:00 [medline]
PHST- 2021/07/21 00:00 [pmc-release]
AID - PONE-D-21-04220 [pii]
AID - 10.1371/journal.pone.0254814 [doi]
PST - epublish
SO  - PLoS One. 2021 Jul 21;16(7):e0254814. doi: 10.1371/journal.pone.0254814.
      eCollection 2021.