Best Practices for Function Development
Writing efficient and maintainable functions is key to building robust applications on the LivePerson Functions platform. While the platform manages the underlying infrastructure, the quality of your code directly impacts the user experience. Poorly optimized functions can lead to increased latency, timeouts, and unreliable behavior for your end users.
This guide outlines core strategies to ensure your functions are performant, resilient, and easy to maintain.
1. Function idempotency
As documented in our Event Source Overview, certain event sources may re-invoke your function with the same payload if they detect an error or timeout. While the platform typically triggers a function once per event, network anomalies or retry policies can lead to multiple invocations.
Goal: Ensure that processing the same event multiple times does not result in harmful side effects (e.g., duplicate orders, repeated customer messages).
Strategies:
- Verify State: Before performing an action, check if it has already been completed.
- Use Session Storage: Leverage the Context Session Store to save a "processed" flag or state for a specific conversation or event ID. If a function is re-triggered, check this store to skip redundant logic.
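A minimal sketch of this pattern is shown below. The sessionStore object with its get/set methods, the performAction helper, and the eventId field are hypothetical placeholders standing in for whichever Context Session Store client and payload shape you actually use:
// A minimal idempotency check (sessionStore, performAction, and eventId are hypothetical placeholders)
async function lambda(input) {
  const eventId = input?.payload?.eventId;

  // 1. Verify state: has this event already been handled?
  const alreadyProcessed = await sessionStore.get(`processed:${eventId}`);
  if (alreadyProcessed) {
    return { skipped: true };
  }

  // 2. Perform the side effect exactly once
  await performAction(input);

  // 3. Mark the event as processed for future re-invocations
  await sessionStore.set(`processed:${eventId}`, true);
  return { skipped: false };
}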
2. Performance
It is crucial to remember that functions run in a constrained environment. By default, a function instance is allocated 256 MiB of memory and approximately 0.083 vCPU.
- Resource Awareness: Unlike a powerful local developer machine, your function has limited processing power. Operations that seem instant locally may take longer in the cloud.
- Data Limits: Be mindful of the data you process. Loading large datasets into memory can quickly lead to Out of Memory crashes. Always stream data or process it in chunks where possible.
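As an illustration, the following sketch processes records page by page instead of loading the full dataset into memory; fetchPage and summarize are hypothetical helpers standing in for your paginated data source:
// ✅ Process data in chunks (fetchPage and summarize are hypothetical helpers)
async function lambda(input) {
  let cursor = null;
  const summaries = [];
  do {
    // Fetch a small page instead of the whole dataset
    const page = await fetchPage({ cursor, limit: 100 });
    // Keep only the reduced data you actually need in memory
    summaries.push(...page.items.map(summarize));
    cursor = page.nextCursor;
  } while (cursor);
  return summaries;
}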
2.1 Parallelizing calls
Node.js is designed for asynchronous I/O. A common pitfall is awaiting independent asynchronous operations sequentially, which unnecessarily increases execution time.
Recommendation: Use Promise.all or Promise.allSettled to execute independent tasks concurrently.
- Promise.all: Rejects as soon as any of the promises rejects (the remaining promises keep running, but their results are ignored). Useful when all tasks are required for the subsequent logic.
- Promise.allSettled: Waits for all promises to finish, regardless of whether they succeeded or failed. This is useful when you want to ensure all independent operations are attempted, but it requires you to manually check the status (fulfilled or rejected) of each result.
// ❌ Slower: Sequential execution
const userProfile = await getUserProfile(userId);
const accountStatus = await getAccountStatus(userId);
// ✅ Faster: Parallel execution (fails fast)
const [userProfile, accountStatus] = await Promise.all([
getUserProfile(userId),
getAccountStatus(userId)
]);
// ✅ Robust: Parallel execution (waits for all)
const results = await Promise.allSettled([
getUserProfile(userId),
getAccountStatus(userId)
]);
// Note: You must check results[i].status === 'fulfilled'
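For example, the settled results could be checked like this (a minimal sketch reusing the calls from above):
// ✅ Check each settled result before using it
const [profileResult, statusResult] = await Promise.allSettled([
  getUserProfile(userId),
  getAccountStatus(userId)
]);

const userProfile = profileResult.status === "fulfilled" ? profileResult.value : null;
if (statusResult.status === "rejected") {
  console.warn("Account status lookup failed", { reason: String(statusResult.reason) });
}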
2.2 Avoid blocking CPU
Node.js uses a single-threaded Event Loop. If you execute CPU-intensive code (e.g., complex calculations, heavy cryptographic operations, or large synchronous loops), you block the entire thread. This prevents the function from handling other tasks, such as I/O callbacks, leading to timeouts.
- Strict Timeout: Functions have a hard 30-second execution limit. Unlike previous versions, the environment is frozen immediately when the timeout is reached. Any pending tasks will be abruptly terminated.
- Offloading: If you need to perform heavy computation, consider offloading it to a dedicated external service rather than running it inside the function.
- Orchestrated Function: You can invoke an additional function and offload tasks to it. However, the root function's timeout does not increase, so this approach is only recommended for "fire-and-forget" tasks.
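A minimal sketch of the offloading approach: the heavy computation runs in a dedicated external service instead of on the function's Event Loop (the scoring service URL and the expensiveScore helper are hypothetical):
// ❌ Blocks the Event Loop: CPU-heavy synchronous work inside the function
function scoreAllRecords(records) {
  return records.map(expensiveScore); // hypothetical heavy computation
}

// ✅ Offloads the work: the function only performs lightweight I/O
async function lambda(input) {
  const response = await fetch("https://scoring.example.com/score", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ records: input.payload.records })
  });
  return response.json();
}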
2.3 Caching (secrets & clients)
Initializing connections and retrieving secrets takes time. Optimizing how you manage these resources can significantly improve performance.
2.4 Improving Cold Start performance
Functions are aggressively scaled up and down by the platform, so new instances of your function start regularly. Keeping these cold starts as short as possible keeps your functions responsive.
In the following example, the lpClient is cached correctly:
import { Toolbelt } from "core-functions-toolbelt";
// ✅ Global variables allow caching between function calls
let lpClient;
async function lambda(input) {
if (!lpClient) {
// ✅ Lazy initialization when the function is actually called
lpClient = Toolbelt.LpClient();
}
}
In contrast, static content should be initialized in the global scope, since re-declaring it on every invocation wastes compute resources:
// ✅ Declare static content early
const API_URL = process.env.API_URL;
const HEADERS = {
  'Content-Type': 'application/json',
  'Authorization': `Bearer ${process.env.API_KEY}`
};
async function lambda(input) {
const response = await fetch(`${API_URL}/data`, { headers: HEADERS });
}
Furthermore, be as explicit as possible when importing packages. This reduces the function's memory footprint and therefore improves its Cold Start performance:
import * as es from 'es-toolkit'; // ❌ Don't import the entire package
import { last } from 'es-toolkit/array'; // ✅ Only import the functionality you need
async function lambda(input) {
const numbers = [1, 2, 3, 4, 5];
return last(numbers);
}
2.5 Fail early, Fail fast
To keep your function fast and efficient, analyse it against the following two criteria:
- Which critical calls can potentially fail?
- Can these calls be made first (fail early) and/or be time-boxed (fail fast)?
Take this example:
async function lambda(input) {
  let criticalData;
  // ✅ Fail Early: do the critical calls first and exit early if necessary
  try {
    const response = await fetchWithTimeout("https://api.example.com", {}, 10_000);
    // Validate the response (fetch doesn't throw on 404/500)
    if (!response.ok) {
      throw new Error(`API Error: ${response.status}`);
    }
    criticalData = await response.json();
  } catch (error) {
    console.error("Failed critical call", error);
    // Fail immediately, preventing further execution
    throw new Error("Could not complete task: Dependency failed");
  }
  // do your computations...
  return criticalData;
}
// ✅ Fail Fast: adding a timeout wrapper to fetch will allow you to cancel a call early
const fetchWithTimeout = async (url, options = {}, timeoutMs = 5000) => {
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
try {
const response = await fetch(url, {
...options,
signal: controller.signal
});
return response;
} finally {
clearTimeout(timeoutId);
}
};
2.6 Secrets caching
The SecretClient automatically caches secrets in memory for 5 minutes by default.
- Instance-Local: This cache is local to the specific running instance. If your function scales to multiple instances, they do not share this cache.
- Consistency vs. Speed: If one instance updates a secret, others may still hold the old value until their cache expires. Only bypass the cache ({ useCache: false }) if strict, immediate consistency is required, as this will degrade performance.
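A minimal sketch of this trade-off, assuming a Toolbelt SecretClient with a readSecret method (check the Secrets documentation for the exact factory and method names in your Toolbelt version; the secret name is an example):
import { Toolbelt } from "core-functions-toolbelt";

// Assumed client factory; "my-api-key" is an example secret name
const secretClient = Toolbelt.SecretClient();

async function lambda(input) {
  // Default: served from the instance-local cache when available
  const apiKey = await secretClient.readSecret("my-api-key");

  // Bypass the cache only when strict consistency is required (slower)
  const freshApiKey = await secretClient.readSecret("my-api-key", { useCache: false });
  // ... use apiKey / freshApiKey in your calls
}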
3. Maintainable code
3.1 Configurability using environment variables
Avoid hardcoding configuration values (like URLs, timeouts, or non-sensitive keys) directly in your code.
- Best Practice: Use Environment Variables to decouple configuration from logic. This allows you to adjust settings without modifying and redeploying the code.
- Constraints:
  - Limit: Maximum of 512 variables per function.
  - Type: Values are always strings. Ensure you parse them correctly (e.g., parseInt(process.env.MAX_RETRIES)).
  - Naming: Follow POSIX standards (uppercase letters, digits, underscores).
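For example (the variable names below are illustrative):
// ✅ Read, default, and parse environment variables once in the global scope
const MAX_RETRIES = parseInt(process.env.MAX_RETRIES || "3", 10); // numbers arrive as strings
const FEATURE_ENABLED = process.env.FEATURE_ENABLED === "true";   // booleans arrive as strings

async function lambda(input) {
  if (!FEATURE_ENABLED) {
    return { skipped: true };
  }
  // ... use MAX_RETRIES in your retry logic
}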
4. Robustness & error handling
Design your functions to handle "unhappy paths." External APIs may go down, rate limits may be reached, and inputs may be malformed.
Strategies:
- Graceful Degradation: If a non-critical dependency fails (e.g., a secondary enrichment service), your function should ideally continue to work and provide partial value rather than crashing completely.
- Try-Catch Blocks: Always wrap asynchronous calls, especially Toolbelt calls and external fetch requests, in try-catch blocks.
- Validate Inputs: Never assume the input payload is perfect. Check for the existence of required fields before accessing them to avoid runtime errors.
try {
const result = await someExternalApi();
} catch (error) {
console.error("External API failed", { error: error.message });
// Fallback logic or graceful exit
return "Service currently unavailable, please try again later.";
}
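To illustrate the Validate Inputs point, here is a minimal sketch (the field names userId and orderId are examples):
async function lambda(input) {
  // ✅ Check required fields before using them
  const { userId, orderId } = input?.payload ?? {};
  if (!userId || !orderId) {
    throw new Error("[OrderHandler] Failed to process order because userId or orderId is missing from the payload.");
  }
  // safe to continue with userId and orderId here
}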
Good error handling is crucial to avoid unintended code execution, but on its own it may not be enough to diagnose issues at runtime.
Therefore, a good error message answers the following questions:
- What happened? (High-level summary)
- Where did it happen? (Component/Function name)
- Why did it happen? (The variable or state that caused it)
For example:
Error: undefined is not an object // ❌ Unusable information
[UserService] Failed to validate user registration: 'email' field is missing from payload. // ✅ Checks all boxes
A solid scheme to build your error messages around could be:
[Function/Action] Failed to <goal> because <specific_reason>.
5. Proper monitoring
Visibility is essential for maintaining healthy functions.
5.1 Logging
Effective logging is your primary tool for debugging.
- Structured Logging: Prefer logging JSON objects over plain strings. This makes it significantly easier to filter and search logs in the dashboard.
- Constraints: You are limited to 10 log entries and a total of 16KB of log data per invocation. Exceeding this will result in lost logs.
- Security: Never log Personally Identifiable Information (PII) or Secrets (API keys, passwords).
- Reference: See Logging Feature for full details.
// ✅ Good
console.info("Processing order", { orderId: 123, status: "pending" });
5.2 Alerting
Don't wait for a customer to report an issue. Proactively monitor your functions.
- Action: Use the Functions UI to set up Activity Stream alerts.
- Triggers: Configure alerts for spikes in Error Rates or High Latency. This allows you to react immediately when a function starts behaving abnormally.