[REQUEST] Metrics direct logging? #1096
Comments
**Potential solution**

The user's request is to log metrics directly to the same output as the main Soketi process logs, without setting up Prometheus. To achieve this, we will implement a new `metrics` method in the `Log` class, a periodic logging interval in the `Server` class, and a `logMetrics` method in the `Metrics` class. The reasoning for this solution is to provide a seamless integration of metrics logging into Soketi's existing logging system. By using a consistent log level and a distinguishable prefix for metrics logs, we can ensure that the metrics data is easily identifiable and can be parsed by the user's sophisticated log drain/query setup.

**Code**

For the `Log` class in `src/log.ts`:

```typescript
export class Log {
    // ... existing methods ...

    static metrics(message: any): void {
        // Prefix the message with a specific identifier for metrics logs.
        const metricsMessage = `[METRICS] ${this.prefixWithTime(message)}`;

        // Log the metrics message at an 'info' level or another appropriate level.
        this.log(metricsMessage, 'cyan', 'mx-2');
    }

    // ... existing methods ...
}
```

For the `Server` class in `src/server.ts`:

```typescript
// ... (rest of the imports and Server class)

export class Server {
    // ... (existing properties and methods)

    // Add a property to hold the interval.
    private metricsLogInterval: NodeJS.Timer | null = null;

    // ... (rest of the constructor and methods)

    async start(callback?: CallableFunction) {
        // ... (existing start logic)

        // After the server has started, set up the metrics logging if enabled.
        if (this.options.metrics.enabled && this.metricsManager) {
            // Set up an interval to log metrics periodically.
            this.metricsLogInterval = setInterval(() => {
                this.metricsManager.logMetrics(); // This method should be implemented in the Metrics class.
            }, 10000); // Log every 10 seconds, for example.
        }

        // ... (rest of the start logic)
    }

    // ... (rest of the methods)

    stop(): Promise<void> {
        // ... (existing stop logic)

        // Clear the metrics logging interval when stopping the server.
        if (this.metricsLogInterval) {
            clearInterval(this.metricsLogInterval);
            this.metricsLogInterval = null;
        }

        // ... (rest of the stop logic)
    }

    // ... (rest of the Server class)
}

// ... (rest of the file)
```

For the `Metrics` class in `src/metrics/metrics.ts`:

```typescript
import * as prom from 'prom-client';
// ... other imports ...

export class Metrics implements MetricsInterface {
    // ... existing code ...

    /**
     * Log the metrics data to the standard output or a log file.
     */
    logMetrics(): void {
        if (!this.server.options.metrics.enabled) {
            return;
        }

        // Retrieve metrics as JSON or another suitable format.
        this.getMetricsAsJson().then(metrics => {
            if (metrics) {
                // Format the metrics data into a log-friendly string.
                const metricsLog = this.formatMetricsForLogging(metrics);

                // Use the Log class to output the formatted metrics.
                Log.metrics(metricsLog);
            }
        }).catch(error => {
            Log.error('Error retrieving metrics for logging: ' + error.message);
        });
    }

    /**
     * Format the metrics data for logging.
     */
    private formatMetricsForLogging(metrics: prom.metric[]): string {
        // Convert the metrics array into a string that can be easily read
        // and parsed from log files. This could be a JSON string, or a
        // custom format that suits the user's log drain/query setup.
        return JSON.stringify(metrics, null, 2);
    }

    // ... existing code ...
}
```

This implementation ensures that metrics are logged in a structured and consistent manner, making it easy for the user to integrate with their existing log analysis tools and build dashboards as needed.

Files used for this task:

**Changes on `src/log.ts`**

To address the user's request for logging metrics directly, we can add a new method to the `Log` class. Here's a practical proposal for implementing the new method.
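Before the diff itself, the proposed `[METRICS]` line format can be exercised in isolation. This is a minimal, hypothetical sketch — `MetricSample` and `formatMetricsLine` are illustrative names, not soketi APIs — but it shows why the payload is best serialized without indentation: most log drains treat each line as one event, so the whole metrics payload should stay on a single line.

```typescript
// Hypothetical shape of one entry returned by a getMetricsAsJson()-style call.
interface MetricSample {
    name: string;
    help: string;
    values: { value: number; labels: Record<string, string> }[];
}

// Serialize without indentation so the whole payload stays on one log line.
function formatMetricsLine(metrics: MetricSample[]): string {
    return `[METRICS] ${JSON.stringify(metrics)}`;
}

const line = formatMetricsLine([
    {
        name: 'soketi_connected',
        help: 'Currently connected sockets',
        values: [{ value: 3, labels: { app_id: 'demo' } }],
    },
]);

console.log(line); // one line: "[METRICS] [...]" with no embedded newlines
```

Passing `JSON.stringify(metrics, null, 2)` instead, as in the snippet above, produces a multi-line entry; whether that is acceptable depends on whether the user's log drain can handle multi-line events.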
Here's an example of how the new method could be implemented in the `Log` class:

```typescript
export class Log {
    // ... existing methods ...

    static metrics(message: any): void {
        // Prefix the message with a specific identifier for metrics logs.
        const metricsMessage = `[METRICS] ${this.prefixWithTime(message)}`;

        // Log the metrics message at an 'info' level or another appropriate level.
        this.log(metricsMessage, 'cyan', 'mx-2');
    }

    // ... existing methods ...
}
```

With this new method in place, the user can call `Log.metrics()` wherever metrics need to be written to the process output. This implementation assumes that the metrics data is already collected and formatted appropriately before being passed to the method.

**Changes on `src/server.ts`**

To address the user's request for logging metrics directly without using Prometheus, we need to modify the `Server` class. Here's a practical proposal to implement this feature:
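Before the diff, the interval lifecycle can be sketched standalone. `MetricsLogger` below is a hypothetical stand-in for the `Server` wiring, not soketi code; it adds two refinements worth considering — a guard against starting the timer twice, and `unref()` so the timer alone never keeps the Node.js process alive during shutdown.

```typescript
// Hypothetical stand-in for the Server wiring described above.
class MetricsLogger {
    private interval: NodeJS.Timeout | null = null;

    start(logFn: () => void, everyMs: number): void {
        if (this.interval) {
            return; // guard against starting the timer twice
        }
        this.interval = setInterval(logFn, everyMs);
        this.interval.unref(); // don't let the timer keep the process alive
    }

    stop(): void {
        if (this.interval) {
            clearInterval(this.interval);
            this.interval = null;
        }
    }

    isRunning(): boolean {
        return this.interval !== null;
    }
}

const logger = new MetricsLogger();
logger.start(() => console.log('[METRICS] tick'), 10_000);
console.log(logger.isRunning()); // true
logger.stop();
console.log(logger.isRunning()); // false
```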
Here's a code snippet to illustrate the changes to the `Server` class:

```typescript
// ... (rest of the imports and Server class)

export class Server {
    // ... (existing properties and methods)

    // Add a property to hold the interval.
    private metricsLogInterval: NodeJS.Timer | null = null;

    // ... (rest of the constructor and methods)

    async start(callback?: CallableFunction) {
        // ... (existing start logic)

        // After the server has started, set up the metrics logging if enabled.
        if (this.options.metrics.enabled && this.metricsManager) {
            // Set up an interval to log metrics periodically.
            this.metricsLogInterval = setInterval(() => {
                this.metricsManager.logMetrics(); // This method should be implemented in the Metrics class.
            }, 10000); // Log every 10 seconds, for example.
        }

        // ... (rest of the start logic)
    }

    // ... (rest of the methods)

    stop(): Promise<void> {
        // ... (existing stop logic)

        // Clear the metrics logging interval when stopping the server.
        if (this.metricsLogInterval) {
            clearInterval(this.metricsLogInterval);
            this.metricsLogInterval = null;
        }

        // ... (rest of the stop logic)
    }

    // ... (rest of the Server class)
}

// ... (rest of the file)
```

This code snippet demonstrates how to set up and clear a periodic logging function for metrics within the `Server` class.

**Changes on `src/metrics/metrics.ts`**

To address the user's request for logging metrics directly without using Prometheus, we can implement a new method within the `Metrics` class. Here's a practical proposal for implementing the `logMetrics` method:
```typescript
import * as prom from 'prom-client';
// ... other imports ...

export class Metrics implements MetricsInterface {
    // ... existing code ...

    /**
     * Log the metrics data to the standard output or a log file.
     */
    logMetrics(): void {
        if (!this.server.options.metrics.enabled) {
            return;
        }

        // Retrieve metrics as JSON or another suitable format.
        this.getMetricsAsJson().then(metrics => {
            if (metrics) {
                // Format the metrics data into a log-friendly string.
                const metricsLog = this.formatMetricsForLogging(metrics);

                // Use the Log class to output the formatted metrics.
                Log.metrics(metricsLog);
            }
        }).catch(error => {
            Log.error('Error retrieving metrics for logging: ' + error.message);
        });
    }

    /**
     * Format the metrics data for logging.
     */
    private formatMetricsForLogging(metrics: prom.metric[]): string {
        // Convert the metrics array into a string that can be easily read
        // and parsed from log files. This could be a JSON string, or a
        // custom format that suits the user's log drain/query setup.
        return JSON.stringify(metrics, null, 2);
    }

    // ... existing code ...
}
```

In this implementation, the `logMetrics` method first checks whether metrics are enabled and then retrieves the collected metrics as JSON. The `formatMetricsForLogging` method converts the metrics array into a string that can be easily read and parsed from log files; this could be a JSON string, or a custom format that suits the user's log drain/query setup. To ensure that `logMetrics` is called at appropriate intervals, you would need to set up a periodic logging function in the `Server` class. Remember to handle any potential exceptions that may occur during the retrieval or logging of metrics to ensure the stability of the Soketi process.

Disclaimer: This comment was entirely generated using AI. Be aware that the information provided may be incorrect.
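On the consuming side, a structured line like this is easy to recover. A hedged sketch of what a log-drain processor might do follows; `parseMetricsLine` is an illustrative name, and the sketch assumes the simple `[METRICS] <json>` shape — if soketi's `prefixWithTime` inserts a timestamp between the prefix and the payload, the processor would also need to strip that before parsing.

```typescript
const METRICS_PREFIX = '[METRICS] ';

// Recover the JSON payload from a metrics log line; returns null for
// ordinary (non-metrics) log lines or malformed payloads.
function parseMetricsLine(line: string): unknown | null {
    if (!line.startsWith(METRICS_PREFIX)) {
        return null;
    }
    try {
        return JSON.parse(line.slice(METRICS_PREFIX.length));
    } catch {
        return null; // payload was not valid JSON
    }
}

console.log(parseMetricsLine('[METRICS] [{"name":"soketi_connected"}]'));
console.log(parseMetricsLine('ordinary log line')); // null
```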
Hi all, I've recently deployed Soketi to prod to replace our previous Pusher implementation, and it's been working great.
We're not currently using Prometheus and I'm not terribly interested in setting it up just for the sake of basic monitoring of Soketi.
Is it possible to log the metrics content to the same log thread/output as the main Soketi process? Our log drain/query setup is quite sophisticated and we'd be able to build dashboards based on that output directly.
Thanks!