Activity Recognition Dataset
----------------------------
(c) 2011, Susanna Pirttikangas, Kaori Fujinami, Jaakko Suutala

Please cite the related article [1] when using the dataset.

Dataset
-------
The file "activitydata.txt" contains pre-calculated features from 3D
acceleration sensors attached to the subject's necklace, right knee, left
wrist, and right wrist. From each of the 3x4 sensor channels, the mean and
standard deviation over a 0.7-second (non-overlapping) window are calculated,
producing a 24-dimensional feature vector at each time step. The initial
sampling rate was 10 Hz.

The dataset is an n x (d+1) matrix (n = number of examples, d = feature
dimension), where each row represents one feature vector and each column the
value of a particular feature, the last column being the class label (i.e.,
the activity to be recognized). The type and order number of each feature are
presented below:

From:  Necklace         Right Knee         Left Wrist         Right Wrist
(1) mean(AccX)      (7) mean(AccX)     (13) mean(AccX)    (19) mean(AccX)
(2) mean(AccY)      (8) mean(AccY)     (14) mean(AccY)    (20) mean(AccY)
(3) mean(AccZ)      (9) mean(AccZ)     (15) mean(AccZ)    (21) mean(AccZ)
(4) std(AccX)      (10) std(AccX)      (16) std(AccX)     (22) std(AccX)
(5) std(AccY)      (11) std(AccY)      (17) std(AccY)     (23) std(AccY)
(6) std(AccZ)      (12) std(AccZ)      (18) std(AccZ)     (24) std(AccZ)

Activity labels and the number of examples in each category are presented
below. The dataset contains 13 subjects (P1-P13), and the feature vector rows
are ordered accordingly. For example, to find the starting row of P1's
"sit and relax" activity, sum the counts of the preceding activities:
55 + 107 + 121 = 283. For more details about the dataset and experiments,
see [1], [2], and [3].
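The windowing and the row-offset bookkeeping described above can be sketched
in Python/NumPy. This is not part of the original distribution: the function
name `extract_features`, the assumed raw-channel ordering (necklace, right
knee, left wrist, right wrist; X, Y, Z per sensor), and the use of the
population standard deviation are all illustrative assumptions.

```python
import numpy as np

def extract_features(raw, fs=10, window_s=0.7):
    """Sketch of the described feature extraction.

    raw: (n_samples, 12) array of acceleration readings, assumed ordered as
    necklace/knee/left-wrist/right-wrist, with X, Y, Z per sensor.
    10 Hz sampling and a 0.7 s non-overlapping window give 7 samples/window.
    """
    w = int(round(fs * window_s))                  # 7 samples per window
    n_windows = raw.shape[0] // w
    windows = raw[:n_windows * w].reshape(n_windows, w, raw.shape[1])
    means = windows.mean(axis=1)                   # (n_windows, 12)
    stds = windows.std(axis=1)                     # population std assumed
    # Per sensor: mean(X,Y,Z) then std(X,Y,Z), matching columns (1)-(24)
    return np.concatenate(
        [np.hstack([means[:, s*3:(s+1)*3], stds[:, s*3:(s+1)*3]])
         for s in range(4)],
        axis=1,
    )                                              # (n_windows, 24)

# Row offsets per subject: sum the counts of the preceding activities,
# as in the P1 "sit and relax" example from the text.
p1_preceding = [55, 107, 121]   # clean white board, sit and read news, stand
start_row = sum(p1_preceding)   # 283; "sit and relax" rows begin after this
```

Since activities are stored back to back per subject, the same cumulative-sum
idea (e.g. `np.cumsum` over a subject's column in the table below) recovers
the start row of any activity.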
Activity              P1   P2   P3   P4   P5   P6   P7   P8   P9  P10  P11  P12  P13
------------------------------------------------------------------------------------
 1=clean white board   55   78   77   97   98   99  106   42   39   81   24   96   98
 2=sit and read news  107   99  103  102  103  100  105  104  104  100  137   95  116
 3=stand              121   92   94   92   94   90   95   95   91   96   87   90   90
 4=sit and relax       96   94   99   97   97   92  104   99   99   94   95   93   95
 5=sit and watch TV   121  173  169  170   52  163  183  169  173  163  164  173  169
 6=drink               56   74  106  157   48  474  201   63   27   55   48  154   74
 7=brush your teeth    93   94   91   94   98   92   98   98   91   90   97  100   97
 8=lie down            94  105   37   96   98   77  102   96   93   14   69  100   25
 9=vacuum cleaning    218  257  230  193  213  138  253  254  137  168  176  113  185
10=type                95   95  113  110  101   34  131  101   93   37   95   95  110
11=walk               238  772  349  166  271  113  246  334  112    0  310  378   71
12=walk stairs up      35   36   31   35   30   23   33   38   30   17   44   51   10
13=walk stairs down    31   29   30   29   28   16   30   29   25   34   40   24    9
14=elevator up         32   28   29   30   16   42   24   31   23   54   26   52   26
15=elevator down       19   26   31   29   20   24   20   28   24   27   27   25   24
16=run                 15   25   23    8   14   20   29   17   16    0   19   17   18
17=cycle                0   69  114   63   36   74   87  126   59   48   30   86  173

References
----------
[1] Pirttikangas S., Fujinami K. & Nakajima T. (2006) Feature selection and
    activity recognition from wearable sensors. In: H.Y. Youn, M. Kim &
    H. Morikawa (Eds.), UCS 2006, LNCS 4239, Springer-Verlag Berlin
    Heidelberg, pp. 516-527.
[2] Suutala J., Pirttikangas S. & Röning J. (2007) Discriminative temporal
    smoothing for activity recognition from wearable sensors. Proc. 4th
    International Symposium on Ubiquitous Computing Systems (UCS07), Tokyo,
    Japan, pp. 182-195.
[3] Suutala J. (2012) Learning Discriminative Models from Structured
    Multi-sensor Data for Human Context Recognition. Doctoral Thesis,
    University of Oulu.