# Creating Skills Challenges

This documentation assumes that you're already familiar with authoring regular scenarios, especially the index.json file and scenario syntax.

# How a Skills Challenge differs from a Scenario

A scenario is a step-by-step, interactive lesson on how to solve a real-world problem.

A skills challenge is an opportunity for learners to solve a real problem by themselves, helping them gain a deeper understanding of how to apply a given tool or approach.

As such, challenges don’t contain an instructional component (no lesson text), but instead present tasks to be completed. As the learner completes each task, the UI can provide real-time feedback to encourage the learner to keep making progress. The UI can even provide hints to help guide the learner when they get stuck.

To illustrate, the Example Skills Challenges section below shows two examples from O'Reilly's learning platform.

In a skills challenge, the sidebar can also provide hints — such as when learners appear to be stuck, and are taking too long to complete a task. But any hints should merely prompt them or nudge them in the right direction, not provide a definitive answer or solution to the task.

Also, skills challenges, despite the name, shouldn't necessarily be “challenging” — in the sense of being difficult. Someone who has already learned the necessary skills should be able to easily complete the corresponding skills challenge.

This completability is essential to how skills challenges function as learning tools: If the learner is able to complete all the tasks, then they know they have learned those skills. (Hooray!) If they cannot complete the tasks or experience trouble along the way, then they know they need to resume studying and practicing.

(The hint text can even be used to point people back to relevant learning resources. For example, "Don't recall the command for initializing a new server? Revisit Chapter 3 in Servers & Widgets." We don't want people to get stuck in a learning dead-end; we always want to help provide a path forward, especially when things "aren't working.")

# Example Skills Challenges

The skills challenges UI shown to learners looks like this:

(Screenshot: Katacoda Challenge Example)

Once the learner has completed all the tasks, they can go back and review a task's details by clicking its title.

(Screenshot: Katacoda Challenge Example, completed)

Please reference the examples here:

To get started, you can either clone the skills challenge template in the examples above, or you can reconfigure an existing scenario, as described in the next section.

# Reconfiguring a Scenario as a Skills Challenge

To convert an existing scenario to a skills challenge:

# 1. Specify the challenge version.

Add "type": "challenge@0.7" to the top of the index.json file, as in this example source.

{
    "type": "challenge@0.7",
    "title": "Skills Challenge Template",
    "description": "Basic template for a skills challenge",
    …

This tells Katacoda that the scenario will be a skills challenge, as well as which version of the challenge API to use.

⚠️ At the moment, the only valid version is 0.7.

# 2. Specify a verification script for each task.

While regular scenarios have "steps," the challenges API interprets steps as "tasks." Your index.json should continue to specify an array of steps (tasks), but you'll need to specify a verify script for each one (and, optionally, a hint), as in this example source:

{
    "type": "challenge@0.7",
    "title": "Skills Challenge Template",
    "description": "Basic template for a skills challenge",
    "difficulty": "Beginner",
    "time": "5 minutes",
    "details": {
        "steps": [
            {
                "title": "Task 1: Bananas",
                "text": "1_task.md",
                "verify": "1_verify.sh",  // <-- New
                "hint": "1_hint.sh"       // <-- Optional
            },
            {
                "title": "Task 2: Apples",
                "text": "2_task.md",
                "verify": "2_verify.sh",  // <-- New
                "hint": "2_hint.sh"       // <-- Optional
            }
        ],

# 3. Update each title and text value.

The title should briefly summarize the task at hand. Lead with the verb, indicating the action to be taken. For example:

  • Create a new .config file
  • Increase the widget capacity

The text property points to a Markdown .md file, which will provide a more detailed prompt for the task. Keep these brief, while providing any specifics needed to complete the task successfully, without telling the learner exactly how to do it. For example:

  • Be sure the new .config file is in the default app directory.
  • Update the cluster settings to ensure at least 100 additional widgets can be accommodated.
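To make this concrete, a hypothetical 1_task.md for the first example above could contain little more than the prompt itself:

```markdown
Create a new .config file for the app.

Be sure the new .config file is in the default app directory.
```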

# 4. Write a verification script for each task.

The verify property points to a Bash shell script (.sh file). This script is evaluated continuously in the background until it returns an exit code of zero (success), at which point the task is flagged as completed and the challenge proceeds to display the next task. No parameters are passed to the verification script; it should return the standard zero exit code for success or non-zero for failure.

While testing, it can be convenient to manually run any verification scripts in the foreground, to ensure you get the expected exit code. Note that in Bash, you can check the exit code of the most recently run command by typing echo $?.

For example, if bananas.txt doesn't exist, this will return an exit code of 1 (not success):

$ test -f ./bananas.txt
$ echo $?
1

Or, more concisely, with both commands on a single line, separated by a semicolon:

$ test -f ./bananas.txt; echo $?
1

As soon as we create bananas.txt, the next time the verification test is run, it will pass, returning an exit code of zero (success):

$ touch bananas.txt
$ test -f ./bananas.txt; echo $?
0
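Wrapped as a script, the bananas check above becomes a complete (hypothetical) 1_verify.sh. A shell script's exit code is the exit code of its last command, so no explicit exit is needed:

```shell
#!/bin/bash
# Hypothetical 1_verify.sh for a "create bananas.txt" task:
# the whole check is a single test command, and the script's
# exit code is the exit code of that last command.
verify() {
  test -f ./bananas.txt
}

# Demo: the check fails before the file exists, and passes afterward.
verify && echo "before: pass" || echo "before: not yet"
touch bananas.txt
verify && echo "after: pass" || echo "after: not yet"
```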

# Simple or Complex Tests

Your test commands can be something simple, like checking for the presence of a file, or more complex, like executing a more elaborate .sh script that, in turn, runs other commands (such as regular expression matching or test suites), as needed.

You are free to implement the verification logic however you like, from native Bash commands within the script itself to calling out to Python, Go, Node.js, or any other language, script, or tool. For most purposes, native Bash commands directly in the verification script are sufficient.
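As a sketch of a more elaborate check, the following hypothetical verification script for the "increase the widget capacity" example combines several commands; the task passes only if every check succeeds. The config path and the "capacity=" key are assumptions for illustration:

```shell
#!/bin/bash
# Hypothetical 2_verify.sh for the "increase the widget capacity" task.
# The config path and the "capacity=" key are assumptions, not real API.

CONFIG=./app/default.config

check() {
  # The file must exist before anything else is checked.
  [ -f "$CONFIG" ] || return 1
  # Extract the number from a line like "capacity=150".
  local cap
  cap=$(sed -n 's/^capacity=\([0-9][0-9]*\)$/\1/p' "$CONFIG")
  # Fail if no value was found, or if it is below the required 100.
  [ -n "$cap" ] && [ "$cap" -ge 100 ]
}

# Demo: set up a config that satisfies the task, then verify.
mkdir -p ./app
echo "capacity=150" > "$CONFIG"
check && echo "task complete"
```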

This verification test logic is similar to Katacoda's "Verified Steps" feature:

  • Example scenario: https://katacoda.com/scenario-examples/scenarios/verified-steps
  • Example source: https://github.com/katacoda/scenario-examples/tree/main/verified-steps

# Verification Test Commands Run Continuously

Your test command will be auto-executed about once per second. For quick-to-execute tests, this works well! Something slower — like compiling an entire application — will introduce delay to the UI.

For example, if the test command recompiles the learner's program, and that process takes ~30 seconds to execute, then there will be a delay of at least 30 seconds before the learner is told that they completed the task.

We realize this is not ideal. For now, faster-to-execute tests are better. In the future, we may introduce an option to "click to verify", so instead of auto-running the test command repeatedly, we only run it when the learner has indicated they think they are done with the task (just as with our existing "Verified Steps" feature).
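One way to keep a check fast, sketched below: rather than recompiling on every poll, verify the artifact the build produces. Here main.c and app are assumed names for the learner's source file and compiled binary:

```shell
#!/bin/bash
# Hypothetical fast check: instead of rebuilding the program every second,
# confirm the compiled binary exists and is newer than the source file.
# The file names (main.c, app) are assumptions for illustration.
fast_verify() {
  [ -f ./app ] && [ ./app -nt ./main.c ]
}

# Demo: simulate the learner editing the source, then compiling.
touch main.c
sleep 1
touch app          # stands in for the compiler's output
fast_verify && echo "task complete"
```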

# 5. Optionally, define any hints.

The optional hint property points to a Bash shell script .sh file.

Hints give learners an indication of how to unblock themselves when they appear to be taking a long time, or when they have missed something in their solution that is stopping them from proceeding.

Because hint scripts run within the environment, they can fully inspect and interact with its state and give learners targeted, contextual information. For example, if the task is to deploy an application to Kubernetes, you can check whether the learner has deployed the app but made a mistake with the container image, and use a hint to prompt them to fix the image name before they continue.

An example of a hint script is below. The script receives the number of seconds the learner has spent on the task so far as its first argument. After 5 seconds, it tells the learner that a hint will soon be displayed; after 10 seconds, it suggests using curl.

seconds_sofar=$1

echo "Debug Hint Task 1: $seconds_sofar"

if [[ $seconds_sofar -ge 5 && $seconds_sofar -lt 10 ]]; then
  echo "Keep going, a hint will be shown soon..."
fi

if [[ $seconds_sofar -ge 10 ]]; then
  echo "Hint: try running the command:"
  echo "curl node01:30080"
fi

# Hints and Tips

# Keep the task focused

A challenge should provide learners with focused tasks. If a challenge is complex, break it down into multiple smaller tasks, just as you would decompose the problem in real life.

# File naming conventions

We strongly recommend numbering all task-related files. For example:

  • 1_hint.sh
  • 1_task.md
  • 1_verify.sh
  • 2_hint.sh
  • 2_task.md
  • 2_verify.sh
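If you follow this convention, a tiny (hypothetical) helper can scaffold the files for a new challenge:

```shell
#!/bin/bash
# Hypothetical scaffolding helper: create empty, conventionally named
# task files for a challenge with the given number of tasks.
tasks=3   # assumed task count for illustration

for i in $(seq 1 "$tasks"); do
  touch "${i}_task.md" "${i}_verify.sh" "${i}_hint.sh"
done

ls [0-9]_*   # lists the files just created
```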

# Links

Any output shown to the learner can include links, which will automatically be rendered as clickable. This can be useful to provide learners with additional information or resources.

# 😸 Emoji 🌈

Emojis are supported. 🎉 🏆