Logging module#
MLOps includes a log parser that cleans logs and tries to extract useful information from them; this module is the entry point to that functionality. Since there are many logger types, it is hard to build a perfect parser, so using this one helps your model script get parsed by MLOps and send cleaner messages.
Logger#
- class mlops_codex.logging.Logger(model_type: str)[source]#
Bases:
object
Class that provides a custom logger for model scripts.
- model_type#
Attribute that designates the type of the model being executed. Can be ‘Sync’ or ‘Async’.
- Type:
str
- Raises:
InputError – Invalid input for the logging functions
Example
The logger needs to be instantiated and used inside the function being executed by MLOps, like this:
from joblib import load
import pandas as pd
from mlops_codex.logging import Logger

def score(data_path, model_path):
    logger = Logger('Async')
    logger.debug("USING LOGGER")
    model = load(model_path + "/model.pkl")
    df = pd.read_csv(data_path + '/input.csv')
    if len(df) < 5:
        logger.warning("DF is less than 5 lines")
    df['score'] = 1000 * (1 - model.predict_proba(df)[:, 1])
    output = data_path + '/output.csv'
    df.to_csv(output, index=False)
    return output
- callback(output: str | int | float | list | dict) → str [source]#
Compiles the logs together with the response, for Sync models only. It should wrap the return value of the function being executed. The output must be parseable as JSON, so if you return a non-primitive type, make sure json.dumps can handle it (see the sketch after the parameter list below).
Example
import json

from joblib import load
import pandas as pd
from mlops_codex.logging import Logger

def score(data, base_path):
    logger = Logger('Sync')
    logger.debug("USING LOGGER")
    model = load(base_path + "/model.pkl")
    df = pd.DataFrame(data=json.loads(data), index=[0])
    return logger.callback({"score": 1000 * (1 - float(model.predict_proba(df)[0, 1]))})
- Parameters:
output (str | int | float | list | dict) – Output of the function being executed.
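Since the callback output must survive json.dumps, a quick sanity check is to serialize the value before returning it. A minimal sketch, assuming a numpy scalar as the raw model output (the names raw and output are illustrative, not part of the API):

import json
import numpy as np

# Model outputs are often numpy scalars, which json.dumps may reject
# (e.g. np.float32 / np.int64 are not JSON-serializable):
raw = np.float32(0.127)                      # hypothetical model output
output = {"score": 1000 * (1 - float(raw))}  # cast to a Python primitive first
json.dumps(output)                           # raises TypeError if anything inside is not serializable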
- debug(message: str) → None [source]#
Logs a DEBUG message.
- Parameters:
message (str) – Message that will be logged.
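Example
A minimal usage sketch, mirroring the Async model script from the class example above (the message text is illustrative):

from mlops_codex.logging import Logger

logger = Logger('Async')
logger.debug("Loading model artifacts")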