Working with Logs

Introduction

When it comes to logging, sadly an often undervalued topic, Azure Function Apps offer quite a broad range of mechanisms, including streaming logs, diagnostic logs, scale controller logs and Azure Monitor metrics.

In the last chapter we already used Azure Monitor metrics to query information about the health check status.

In this chapter, we're going to focus solely on the basics of Streaming Logs, which come in two flavours:

  • Built-in log streaming lets you view a stream of your application log files. This stream is equivalent to the output seen when you debug your functions during local development. This streaming method supports only a single instance and can't be used with an app running on Linux in a Consumption plan.
  • Live Metrics streaming lets you view log data and other metrics in near real time when your Function App is connected to Application Insights.

We're also going to look into how to run only specific functions, how to disable functions, and so on.

Commands in this chapter are to be executed in the Function App root directory unless stated otherwise.

Look under the hood

Before we take a look at the logs, we take a short detour to talk about trace levels and log levels, learn the difference between them, and write a small function that produces some log output.

Trace levels

Trace levels define what kind of message we send to our log stream.

One of the reasons why we're focusing solely on the basics of streaming logs is that trace levels in Azure Functions depend on your language of choice.

Nevertheless, the following four basic levels are available in most languages:

  1. Information: purely informative messages; not looking into them on a regular basis shouldn't result in missing any important information. These logs should have long-term value.
  2. Warning: should be used when something unexpected happens, but the function can continue its work.
  3. Error: should be used when the function hits an issue that prevents one or more functionalities from executing properly.
  4. Trace / Verbose: should be used for events considered useful during debugging, when more granular information is needed.

In TypeScript / JavaScript we can access the logger via the context as shown in the table below.

Trace Level | Write with
Information | context.log.info(message) or context.log(message)
Warning     | context.log.warn(message)
Error       | context.log.error(message)
Verbose     | context.log.verbose(message)

We've already used context.log, and if you took a closer look at the logs from the runtime process, you should have already seen some of these messages.

Note: Don't mistake console.log for context.log. Because output from console.log is captured at the function app level, it's not tied to a specific function invocation and isn't displayed in a specific function's logs.
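
A minimal sketch to illustrate the difference (the function name here is just a placeholder, not part of our project):

import { AzureFunction, Context } from "@azure/functions";

const example: AzureFunction = async function (context: Context): Promise<void> {
  // Tied to this invocation; shows up in this function's log stream.
  context.log("written via the invocation context");

  // Captured at the Function App level; not associated with this invocation.
  console.log("written via the console");
};

export default example;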

Log Levels

Log levels define what kind of messages we want to capture in our logs.

While there are only four trace levels, there are seven log levels. That's because the trace levels are language-specific, whereas the log levels are runtime-specific, and the Function App runtime is written in .NET, which knows all seven levels for logs and traces.

The seven log levels are Trace, Debug, Information, Warning, Error, Critical and None, where Trace is the most verbose level and None deactivates logging completely. The level you choose includes that level and every more severe level. So, if you choose Warning, for example, you would also include Error and Critical log messages.

Log levels can be configured for different categories. The important ones for us right now are only Function and default. Function specifies the log level for your functions, and default applies to all categories not configured otherwise.

The log levels are configured in the host.json, i.e. at the scope of the whole Function App. Nevertheless, we can specify the log level per function and will do so in a later task.

Sample host.json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    },
    "logLevel": {
      "default": "Information",
      "Host.Results": "Warning",
      "Function": "Trace",
      "Host.Aggregator": "Warning"
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.0.0, 5.0.0)"
  }
}

As already mentioned, the host.json applies to the whole Function App, so changing the log level for the Function category impacts all your functions. We'll learn how to work around that later in this chapter.

Let's add the log levels to the table

Trace Level | Write with                                         | Log Level
Information | context.log.info(message) or context.log(message)  | Information
Warning     | context.log.warn(message)                          | Warning
Error       | context.log.error(message)                         | Error
Verbose     | context.log.verbose(message)                       | Trace

The only surprise here is that Verbose correlates to Trace, not to Debug as one might have expected.

Note: The host.json is included when deploying the Function App, so remember to reset your settings before you deploy it. Or, as we do later in this chapter, overwrite the host.json settings in your local.settings.json as described here.

Creating a log function

Now, we want to add a function which automatically writes log data once in a while.

🛠 TASK: Create a Timer Trigger function

Let's add another function to write some log messages. But this time we'll create a Timer Trigger function rather than an HTTP Trigger.

A Timer Trigger is triggered at scheduled times. The schedule is defined via a cron expression, which is configured in the function.json.

If you need a verbal interpretation of a cron expression, you can use a cron expression generator website.

Sample function.json
{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */5 * * * *"
    }
  ],
  "scriptFile": "../dist/logs/index.js"
}
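
For reference, the sample schedule uses the six-field NCRONTAB format supported by the Timer Trigger:

{second} {minute} {hour} {day} {month} {day-of-week}
    0      */5      *     *      *          *

Read field by field, "0 */5 * * * *" fires at second 0 of every fifth minute of every hour, i.e. every five minutes.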

🎓 SOLUTION

The function can easily be created using the Timer Trigger template. The difference this time is that we don't need to specify an authLevel, because the function is not called by an external source.

Create the function with:

func new --name logs --template "Timer trigger"

🛠 TASK: Update the function code

Next, we want to update the function code to write a log message for each trace level.

🎓 SOLUTION

Update the code (snippet):

import { AzureFunction, Context } from "@azure/functions";

const timerTrigger: AzureFunction = async function (
  context: Context,
  myTimer: any
): Promise<void> {
  const functionName = context.executionContext.functionName;

  // Only visible once the log level for this function is set to Trace.
  context.log.verbose(`Timer trigger: ${JSON.stringify(myTimer)}`);

  // One message per remaining trace level.
  context.log.info(`=> Information from function "${functionName}"`);
  context.log.warn(`=> Warning from function "${functionName}"`);
  context.log.error(`=> Error from function "${functionName}"`);
};

export default timerTrigger;

🛠 TASK: Configure which functions to run

The task is to start the Function App, but let it run the logs function only.

There are two possibilities to do so: you could either disable all other functions in the local.settings.json, or you could define which functions to run in the host.json. So it's either a denylist or an allowlist.

We want to use an allowlist, but we only want to run some local tests, so we don't want to change the host.json file because it's deployed to Azure. Therefore, we need a way to overwrite the host.json configuration locally.

Check out the documentation and create an allowlist, including only the logs function.

After changing the settings, start the Function App to see the log messages; a short reminder of the commands follows below. Depending on how patient, or rather how impatient, you are, you might also want to update the cron schedule to run the function more often, or take a little break.
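
As a reminder, starting the Function App locally works the same way as in the previous chapters. Assuming your project has a build script for the TypeScript sources (the script name may differ in your setup), it looks roughly like this:

npm run build
func start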

💡 HINT
  • You need to overwrite host settings for functions
  • Overwritten host settings always start with AzureFunctionsJobHost
  • Next, follow the JSON path to the specific setting, replacing each nesting level (every opening curly bracket, {) with two underscores (__)
  • The functions setting is an array, and arrays are numbered starting with zero (0)

🎓 SOLUTION

Update the local.settings.json with:

func settings add "AzureFunctionsJobHost__functions__0" "logs"
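
The command adds a key to the Values section of your local.settings.json. The result looks roughly like this (a sketch; the other entries depend on your setup):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "...",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "AzureFunctionsJobHost__functions__0": "logs"
  }
}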

For the sake of completeness, disabling functions can be done with:

# Noteworthy is the difference on the first level:
# per-function settings are overwritten starting with "AzureWebJobs.<FunctionName>." instead of "AzureFunctionsJobHost".
func settings add "AzureWebJobs.greetings.Disabled" "true"

Note: Some types of logging buffer writes to the log file, which can result in out-of-order events in the stream.

🛠 TASK: Update the log level

As you've seen in the previous task, the logs don't contain the verbose messages yet.

Update the log level to trace for our logs function, but not for any other function.

💡 HINT
  • You need to overwrite host settings for logging
  • See the solution of the previous task for further details on how to do so

🎓 SOLUTION

Update the host.json (snippet):

{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      ...
    },
    "logLevel": {
      "Function.logs": "Trace",
    }
  },
  "extensionBundle": {
    ...
  }
}

For the sake of completeness, updating the local.settings.json instead could be done with:

func settings add "AzureFunctionsJobHost__logging__LogLevel__Function.logs" "Trace"

Streaming logs

At long last, our logs function automatically writes some logs we can work with. Let's get into it!

Built-in log streaming

Built-in log streaming lets you view a stream of your application log files. This is not yet supported for Linux apps in the Consumption plan.

However, that's not an issue for us, because we're luckily using an App Service plan to run our functions.

Use the logstream command to show your Function App logs on the command line:

func azure functionapp logstream <APP_NAME>

Wait a moment until you get the message:

Starting Live Log Stream ---

Finally, publish your Function App to Azure.
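
If you deploy with the Core Tools, publishing is a single command (shown here as a reminder; your deployment flow may differ):

func azure functionapp publish <APP_NAME>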

You'll see some messages like:

[INFO]  Starting OpenBSD Secure Shell server: sshd.
[INFO]  Hosting environment: Production
[INFO]  Content root path: /azure-functions-host
[INFO]  Now listening on: http://[::]:80
[INFO]  Application started. Press Ctrl+C to shut down.
No new trace in the past 1 min(s).

These logs only show messages written by the Function App itself, but not the messages from a specific function.

So let's stop the execution with Ctrl+C and move on.

Live Metrics streaming

Live Metrics streaming lets you view log data and other metrics in near real time. However, it only works for Function Apps connected to Application Insights. And, as you might have guessed, we already did that.

You can start the live metrics view with:

func azure functionapp logstream <APP_NAME> --browser

The command will open the live metrics view for your Function App in your browser, and you can see your incoming requests as well as the messages from our logs function.

Note: The browser view might fail with Data is temporarily inaccessible. Try to deactivate your ad blocker, cookie blocker and so on for portal.azure.com and it should work as expected.

Query Application Insights

As a final step in this chapter, we want to query our log messages from Application Insights.

You can use either the query command or the events show command to retrieve messages from your instance.

Let's look at both options in detail.

Note: New messages are not instantly displayed in Application Insights. You need to wait a couple of minutes for messages to be available.

🛠 TASK (optional): Query your instance name

You can skip this one if you didn't change the name of your Application Insights instance when you created the Azure resources.

If you changed the name, query it and replace ains-training-demo with your <INSTANCE_NAME> in the commands below.

🎓 SOLUTION

Query the name of your Application Insights instance:

az resource list --resource-group rg-functions-demo --query "[?type=='Microsoft.Insights/components'].name"

Using query

The command uses the Kusto Query Language (KQL), which you can also use in the Azure Portal for Azure Monitor as well as in Azure Data Explorer.

Take the latest 2 results within the last 5 minutes with:

az monitor app-insights query --resource-group rg-functions-demo --app ains-training-demo --analytics-query 'traces | sort by timestamp desc | take 2' --offset 5m

The output is one or more tables represented as JSON. It's very powerful, but the output on the command line is not optimal for further processing.

When it comes to our trace levels, you can find the messages for a specific level by querying for the specific severityLevel.
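
For orientation, the severityLevel values in the traces table map to the levels like this:

severityLevel | Level
0             | Verbose
1             | Information
2             | Warning
3             | Error
4             | Critical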

We can query our verbose messages during the last 10 minutes with:

az monitor app-insights query --resource-group rg-functions-demo --app ains-training-demo --analytics-query 'traces | where severityLevel == 0 and operation_Name has "logs" | project message, timestamp | sort by timestamp desc' --offset 10m

You will see some of your Timer Trigger messages written with context.log.verbose(message), provided you raised the log level for the logs function to Trace as in the previous task.

Using events show

The command can be used with the global --query parameter, which takes a JMESPath query string, to filter the messages.

List messages from the last 5 minutes with:

az monitor app-insights events show --resource-group rg-functions-demo --app ains-training-demo --type traces --offset 5m

We can query the messages from our logs function during the last 10 minutes with:

az monitor app-insights events show --resource-group rg-functions-demo --app ains-training-demo --type traces --offset 10m --query "value[?operation.name=='logs'].[trace, timestamp]"

Note: A recommended alternative, especially if you work with more CLIs that output JSON, is to use jq and bat. It's more fun with these two extraordinary tools.