"d) The capability of a system to acquire, process and apply knowledge"
1072
1072
],
1073
1073
"correct_answer": "D",
1074
-
"k_level": "K2",
1075
-
"justification": "Copy Session in AI Assistant for more info."
1074
+
"justification": {
1075
+
"a": "Incorrect. Autonomy and control handover are important but not the core definition of AI.",
1076
+
"b": "Incorrect. Self-learning without supervision refers to unsupervised learning, which is a subtype, not the full definition.",
1077
+
"c": "Incorrect. Assessing environments and acting on pre-learned behavior describes reactive systems but not AI broadly.",
1078
+
"d": "Correct. The syllabus defines AI as systems that can acquire, process, and apply knowledge to perform tasks."
1079
+
}
1076
1080
},
1077
1081
{
1078
1082
"id": 69,
@@ -1084,8 +1088,12 @@ window.questions = [
 "d) The system shall achieve at least 90% accuracy within a specified tolerance range"
 ],
 "correct_answer": "A",
-"k_level": "K2",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Correct. Unexpected scenarios not defined in documentation are difficult to test and validate.",
+  "b": "Incorrect. Performance metrics like timing are testable using standard tools.",
+  "c": "Incorrect. API security can be verified through established test practices.",
+  "d": "Incorrect. Accuracy thresholds can be validated with ground truth data."
+}
 },
 {
 "id": 70,
@@ -1097,21 +1105,29 @@ window.questions = [
 "d) A search engine algorithm prioritizes speed over relevancy"
 ],
 "correct_answer": "B",
-"k_level": "K2",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. Avoiding traffic may align with intended goals, not reward exploitation.",
+  "b": "Correct. Maximizing user engagement by generating long responses manipulates metrics rather than improving quality.",
+  "c": "Incorrect. Prioritizing precision is aligned with legitimate goals.",
+  "d": "Incorrect. Prioritizing speed over relevancy is a trade-off, not an exploit."
+}
 },
 {
 "id": 71,
-"question": "Connect the quality characteristics with their definitions:\\n1 - Transparency\\n2 - Interpretability\\n3 - Explainability\\n\\nA - The understandability of the AI technology by various stakeholders\\nB - The ease with which the algorithm can be determined\\nC - The clarity with which the user interface displays options and controls for interacting with the AI system\\nD - The ease with which users can determine how the AI-based system generates a particular answer\\n\\nWhich combination is correct?",
+"question": "Connect the quality characteristics with their definitions:\n1 - Transparency\n2 - Interpretability\n3 - Explainability\n\nA - The understandability of the AI technology by various stakeholders\nB - The ease with which the algorithm can be determined\nC - The clarity with which the user interface displays options and controls for interacting with the AI system\nD - The ease with which users can determine how the AI-based system generates a particular answer\n\nWhich combination is correct?",
 "options": [
 "a) 1-C; 2-A; 3-B",
 "b) 1-B; 2-D; 3-A",
 "c) 1-C; 2-A; 3-D",
 "d) 1-B; 2-A; 3-D"
 ],
 "correct_answer": "D",
-"k_level": "K2",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. Transparency is about visibility of system internals, not UI clarity.",
+  "b": "Incorrect. Interpretability is about understanding system logic, not transparency.",
+  "c": "Incorrect. Explainability relates to how outcomes are explained, not UI design.",
+  "d": "Correct. Transparency = B (algorithm clarity), Interpretability = A (stakeholder understanding), Explainability = D (reasoning for outputs)."
+}
 },
 {
 "id": 72,
@@ -1123,8 +1139,12 @@ window.questions = [
 "d) Model training"
 ],
 "correct_answer": "B",
-"k_level": "K1",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. Deployment can introduce bias if assumptions change in real-world use.",
+  "b": "Correct. Testing is meant to detect issues, not introduce them.",
+  "c": "Incorrect. Data collection is a major source of bias (e.g. sampling bias).",
+  "d": "Incorrect. Training on biased data propagates those biases."
+}
 },
 {
 "id": 73,
@@ -1136,8 +1156,12 @@ window.questions = [
 "d) Poorly designed final outcome"
 ],
 "correct_answer": "A",
-"k_level": "K1",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Correct. The ML algorithm itself isn't the typical cause of reward hacking — it's the reward structure.",
+  "b": "Incorrect. Poor reward shaping can lead to unintended behaviors.",
+  "c": "Incorrect. Misleading feedback can cause the model to learn the wrong behaviors.",
+  "d": "Incorrect. An ambiguous final goal may incentivize incorrect optimization."
+}
 },
 {
 "id": 74,
@@ -1149,8 +1173,12 @@ window.questions = [
 "d) Reinforcement learning Agent is no longer considered intelligent as it 'hacks' the task and performs actions which were not intended by the programmer in order to receive a reward"
 ],
 "correct_answer": "D",
-"k_level": "K1",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. This reflects a real phenomenon where successful AI is considered 'just software'.",
+  "b": "Incorrect. Some people do dismiss non-neural approaches, which fits the AI Effect.",
+  "c": "Incorrect. Game AI using simple logic being seen as non-AI aligns with the concept.",
+  "d": "Correct. Reward hacking is a testing/failure issue, not a reason to deny the system is intelligent."
+}
 },
 {
 "id": 75,
@@ -1163,7 +1191,12 @@ window.questions = [
 ],
 "correct_answer": "C",
 "k_level": "K1",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. 'Weak AI' is synonymous with Narrow AI, which can still be very capable—it does not imply poor performance.",
+  "b": "Incorrect. LLMs are powerful but still classified as Narrow AI because they lack general cognitive abilities.",
+  "c": "Correct. General AI (AGI) refers to a system capable of performing any intellectual task that a human can do.",
+  "d": "Incorrect. Limited context does not necessarily lead to incorrect outputs if the AI is well-trained for that domain."
+}
 },
 {
 "id": 76,
@@ -1176,7 +1209,12 @@ window.questions = [
 ],
 "correct_answer": "D",
 "k_level": "K1",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. Scalability analysis is useful but doesn’t directly relate to compression or network depth.",
+  "b": "Incorrect. While balancing performance and resources is essential, it’s broader than the focus on depth and efficiency.",
+  "c": "Incorrect. Update mechanisms are part of maintainability, not compression or network design.",
+  "d": "Correct. Network depth is directly tied to both representational capacity and computational efficiency in DNNs."
+}
 },
 {
 "id": 77,
@@ -1189,7 +1227,12 @@ window.questions = [
 ],
 "correct_answer": "A",
 "k_level": "K1",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Correct. Increasing depth improves hierarchical feature extraction and model accuracy (to a point).",
+  "b": "Incorrect. More complex models often require more—not less—training data.",
+  "c": "Incorrect. Deeper models reduce interpretability due to added abstraction layers.",
+  "d": "Incorrect. Efficiency may decrease with depth unless offset by architectural optimizations."
+}
 },
 {
 "id": 78,
@@ -1202,7 +1245,12 @@ window.questions = [
 ],
 "correct_answer": "D",
 "k_level": "K1",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. Back-to-back testing compares results but fails with evolving systems due to non-determinism.",
+  "b": "Incorrect. It is not always more efficient—output variability hinders direct comparisons.",
+  "c": "Incorrect. Metamorphic testing isn’t universally better; it depends on the use case.",
+  "d": "Correct. Metamorphic testing is ideal for validating self-learning systems with expected behavior patterns."
+}
 },
 {
 "id": 79,
@@ -1215,7 +1263,12 @@ window.questions = [
 ],
 "correct_answer": "C",
 "k_level": "K2",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. Rule-based systems lack learning capability and do not constitute AIaaS.",
+  "b": "Incorrect. Fixed pricing rules are deterministic, not adaptive AI behaviors.",
+  "c": "Correct. Adaptive APIs like collaborative filtering exemplify AIaaS by learning from user behavior.",
+  "d": "Incorrect. Static dashboards may use data but don’t involve AI or learning."
+}
 },
 {
 "id": 80,
@@ -1228,7 +1281,12 @@ window.questions = [
 ],
 "correct_answer": "D",
 "k_level": "K2",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. Rule-based systems are static and not AI-driven.",
+  "b": "Incorrect. Basic scheduling is not adaptive unless explicitly machine-learned.",
+  "c": "Incorrect. Pre-defined tagging lacks the learning feedback loop of AI systems.",
+  "d": "Correct. Neural machine translation adapts to data patterns and improves over time, matching AIaaS characteristics."
+}
 },
 {
 "id": 81,
@@ -1241,7 +1299,12 @@ window.questions = [
 ],
 "correct_answer": "A",
 "k_level": "K1",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Correct. Legal contracts often omit model accuracy due to the probabilistic and changing nature of ML outcomes.",
+  "b": "Incorrect. This is a true statement, not a reason for avoiding accuracy commitments.",
+  "c": "Incorrect. Uptime guarantees are common and expected in service agreements.",
+  "d": "Incorrect. Security and availability SLAs are routinely included in contracts."
+}
 },
 {
 "id": 82,
@@ -1254,7 +1317,12 @@ window.questions = [
 ],
 "correct_answer": "D",
 "k_level": "K2",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. Pre-trained models still need evaluation against acceptance criteria.",
+  "b": "Incorrect. Choosing an unsuitable model architecture can still introduce project risks.",
+  "c": "Incorrect. Bias may persist even in pre-trained models, especially if training data was biased.",
+  "d": "Correct. Reusing a pre-trained model reduces development effort and project risk."
+}
 },
 {
 "id": 83,
@@ -1267,7 +1335,12 @@ window.questions = [
 ],
 "correct_answer": "B",
 "k_level": "K1",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. ImageNet is commonly used for training and benchmarking vision models.",
+  "b": "Correct. Using a general-purpose API without tuning can lead to poor results in specific domains.",
+  "c": "Incorrect. Using embedded pre-trained models is often acceptable if validated.",
+  "d": "Incorrect. Modifying and repurposing models is a core transfer learning strategy."
+}
 },
 {
 "id": 84,
@@ -1280,7 +1353,12 @@ window.questions = [
 ],
 "correct_answer": "B",
 "k_level": "K2",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. Transfer learning is common for many ML types, not just DNNs.",
+  "b": "Correct. The effectiveness of transfer learning depends on similarity between source and target domains.",
+  "c": "Incorrect. Tasks don’t need to be identical—just related.",
+  "d": "Incorrect. A small amount of labeled data in the new domain is typically still needed."
+}
 },
 {
 "id": 85,
@@ -1293,7 +1371,12 @@ window.questions = [
 ],
 "correct_answer": "D",
 "k_level": "K2",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. Reinforcement learning may not solve performance drops from concept drift.",
+  "b": "Incorrect. The model may have been suitable initially but failed due to changing data.",
+  "c": "Incorrect. Underfitting would show poor performance from the beginning, not after good initial results.",
+  "d": "Correct. Concept drift occurs when data distribution changes over time, degrading model accuracy."
+}
 },
 {
 "id": 86,
@@ -1306,7 +1389,12 @@ window.questions = [
 ],
 "correct_answer": "A",
 "k_level": "K1",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Correct. Pairwise testing is efficient for finding interaction bugs with fewer test cases than exhaustive testing.",
+  "b": "Incorrect. Exhaustive testing is rarely practical due to combinatorial explosion.",
+  "c": "Incorrect. Pairwise testing is practical and widely used in software testing.",
+  "d": "Incorrect. Automated tools can speed up pairwise testing significantly."
+}
 },
 {
 "id": 87,
@@ -1319,7 +1407,12 @@ window.questions = [
 ],
 "correct_answer": "D",
 "k_level": "K1",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. AI helps optimize test execution and analysis, reducing—not prolonging—test cycles.",
+  "b": "Incorrect. AI enhances testing strategies but doesn’t eliminate the need for human involvement.",
+  "c": "Incorrect. AI doesn’t introduce defects when used responsibly and correctly.",
+  "d": "Correct. AI can optimize test suites by identifying redundant or low-value cases."
+}
 },
 {
 "id": 88,
@@ -1332,7 +1425,12 @@ window.questions = [
 ],
 "correct_answer": "D",
 "k_level": "K2",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. Proper data representation is necessary but doesn’t address real-world dynamics.",
+  "b": "Incorrect. Deferring adaptability testing post-deployment increases production risk.",
+  "c": "Incorrect. Testing phases individually may miss system integration issues.",
+  "d": "Correct. Self-learning systems should be tested in realistic, variable environments to ensure safe adaptation."
+}
 },
 {
 "id": 89,
@@ -1345,15 +1443,25 @@ window.questions = [
 ],
 "correct_answer": "A",
 "k_level": "K2",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Correct. Simply increasing word size (e.g., 32-bit to 64-bit) doesn’t directly improve AI model accuracy and can increase inference time.",
+  "b": "Incorrect. Edge devices increasingly support AI-specific hardware like NPUs.",
+  "c": "Incorrect. Google offers specialized AI hardware like TPUs as cloud services.",
+  "d": "Incorrect. Neuromorphic processors are explicitly designed to diverge from von Neumann architecture."
+}
 },
 {
 "id": 90,
 "question": "To run a small-scale AI model on a PC, which hardware is most suitable?",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. CPUs are general-purpose and not optimal for parallel AI workloads.",
+  "b": "Correct. GPUs are well-suited for AI due to massive parallelism and are widely available for local experimentation.",
+  "c": "Incorrect. NPUs are emerging but not yet common in personal devices.",
+  "d": "Incorrect. TPUs are mainly available in cloud platforms, not personal PCs."
+}
 },
 {
 "id": 91,
@@ -1366,7 +1474,12 @@ window.questions = [
 ],
 "correct_answer": "B",
 "k_level": "K1",
-"justification": "Copy Session in AI Assistant for more info."
+"justification": {
+  "a": "Incorrect. ASICs are rarely exposed as general-purpose cloud services due to their inflexibility.",
+  "b": "Correct. TPUs are widely used in cloud AI services and are optimized for deep learning tasks.",
+  "c": "Incorrect. NPUs are used more in edge and mobile, not dominant in cloud yet.",
+  "d": "Incorrect. SoCs (System-on-Chip) are integration platforms, not standalone AI accelerators."
+}
 },
 {
 "id": 92,
@@ -2152,4 +2265,4 @@ window.questions = [
 "d": "Is not correct. Such an app has the potential to be unfair to vulnerable groups, such as those with disabilities and it may also create unwanted pressure on employees"