Introduction

The CASE software uses a configuration file to manage settings that influence its behavior and functionality. Below is a breakdown of each setting available in the configuration file and its purpose.

Update Settings Command
-settings -update SettingName:True
This sub-command updates a specific setting to the given value within the CASE configuration file. It allows you to alter the application's behavior without manually editing the configuration file.

SFile
Value: logConfig.dat
This setting specifies the filename and location of the encrypted password used to access the Settings.conf file. The file is stored in the same directory as the CASE.exe application.

GPU
Value: (empty)
This setting specifies the GPU to be used by the application for AI-related tasks. If left empty, the system's default GPU is used when available; the GPU is detected automatically by the Python library, so this setting should only be changed by advanced users.

Model
Value: {MODELNAME}.gguf
This setting specifies the AI model used by the application.

BaseDirectoryPath
Value: C:\Path\To\CASE\Directory\
This setting defines the base directory path where the CASE software and related files are stored. It serves as the root directory for other paths within the application.

TemporaryScreenCapturePath
Value: C:\Path\To\Temporary\Image\Locations\
This setting specifies the path where temporary screen captures are stored during the application's runtime and RPA scanning of the system. These files are deleted immediately after processing.

StorageMethod
Value: FlatFile
This setting determines the method used for storing data. 'FlatFile' indicates that data is stored in a simple file-based format rather than a database.

FlatFilePath
Value: C:\Path\To\CASE\Directory\
This setting defines the path where flat files are stored, used by the application to save and retrieve data.

ImageStorageDirectory
Value: C:\Path\To\CASE\Directory\find\
This setting specifies the directory where images are stored for use in RPA (Robotic Process Automation) tasks and other visual data processing.

ModelRepository
Value: C:\Path\To\GPT4All\Host\*.gguf Files\
This setting indicates the directory where AI models are stored, particularly for use with GPT4All and other AI-related operations.

DisplayAscii
Value: True/False
This setting controls whether the ASCII art is displayed at the top of the CASE application on load.

InterfacingStructure
Value: \Conversations\:[Username],[Message],[Screen],[DT],[Receiver],[Submitted],[WaitingOnResponse],[Answered],[ActionPerformed],[TimeToProcess],[CommandsActivated],[id]
This setting defines the structure for interfacing with conversations within the application, detailing the fields used for tracking conversation metadata.

RPAStorageStructure
Value: \RPA\:[Command],[codeToExecute],[Software],[LastUsed],[Dynamic]
This setting specifies the structure for storing RPA commands, including the command, the code to execute, the associated software, the last-used date, and whether the command is dynamic.

LoggingStructure
Value: \Logs\:[sessionid],[logmessage]
This setting defines the structure for storing log data, including session IDs and log messages generated during the application's operation.

MonitoringStructure
Value: \Monitoring_Queue\:[Datapoint],[startTime],[endTime],[Parameter],[AlarmValue],[RPA]
This setting specifies the structure for monitoring queues, used to track data points, timing, parameters, alarm values, and associated RPA commands.

SessionsStructure
Value: \CaseSessions\:[session],[startDT],[endDT],[username]
This setting defines the structure for storing session information, including session IDs, start and end times, and the username associated with each session.

SQLConnectionString
Value: Server=LOCALHOST\SQLExpress;Database=Case;Integrated Security=True;
This setting specifies the connection string for SQL database connections, enabling database interactions within the application.

ConversationalTrainingModel
Value: C:\Path\To\Templating\Python\Files\initCaseTemplate.py
This setting specifies the path to the conversational training model template used to train the CASE software's conversational AI capabilities.

ConversationalTrainingModelData
Value: C:\Path\To\Templating\Python\Files\initCase.Conversational.txt
This setting defines the path to the data file used in conjunction with the conversational training model for training the AI's conversational abilities.

AnalysisTrainingModel
Value: C:\Path\To\Templating\Python\Files\initAnalysisTemplate.py
This setting specifies the path to the analysis training model template used to train the CASE software's data analysis capabilities.

AnalysisTrainingModelData
Value: C:\Path\To\Templating\Python\Files\initCase.Analysis.Empty.txt
This setting defines the path to the data file used in conjunction with the analysis training model for training the AI's analysis capabilities.

AI.ReceivedTokens
Value: 2048
This setting specifies the number of tokens the AI model can receive during processing, determining the input size the model can handle.

AI.GPULayers
Value: 100
This setting defines the number of model layers offloaded to the GPU, affecting the performance of AI computations on compatible hardware.

AI.MaxTokensSent
Value: 4096
This setting specifies the maximum number of tokens the AI model can send in a single operation, influencing the size and complexity of responses.

AI.Temperature
Value: 0.7
This setting controls the creativity of the AI model's responses, with higher values producing more varied outputs. The default value is 0.7.

AI.Top_K
Value: 40
This setting determines the number of highest-probability vocabulary tokens considered by the AI model when generating text. A higher value results in more diverse responses, while a lower value focuses on the most probable choices.

AI.Top_P
Value: 0.4
This setting, also known as "nucleus sampling," controls the cumulative probability threshold for token selection. The model considers tokens with a cumulative probability up to the specified value, balancing diversity and coherence in the generated text.

AI.Min_P
Value: 0.0
This setting specifies the minimum probability for token selection. A value of 0.0 means no minimum probability threshold is applied, allowing the full range of potential outputs.

AI.Penalty
Value: 1.18
This setting applies a penalty to repeated tokens, reducing the likelihood of generating repetitive text. A higher penalty encourages more varied output.

AI.Repeat_Last_N
Value: 64
This setting determines the number of recent tokens considered when applying the repetition penalty, helping the model avoid repeating phrases or words within the specified range.

AI.ConsumptionBatch
Value: 128
This setting defines the batch size of tokens processed at a time during AI operations, affecting the speed and efficiency of text generation.

AI.ShowLoadTimes
Value: True/False
This setting enables or disables the display of model load times in the application, providing insight into the performance of AI operations.

AI.SimulateTyping
Value: True/False
This setting simulates typing by gradually displaying text output, enhancing the user experience during interactions with the AI model.

CaseStable
Value: True/False
This boolean is set by the application after it checks for the Settings.conf file. If it does not find a stable and properly configured Settings.conf file, it sets this value to 'False'.

BypassLLM
Value: True/False
This setting determines whether to bypass the Large Language Model (LLM) in certain operations. Setting this to 'False' ensures that the LLM is used as intended.
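To illustrate what a `-settings -update SettingName:Value` sub-command does conceptually, the sketch below parses the argument and rewrites one entry in a settings file. The one-pair-per-line `SettingName: Value` file format and both function names are assumptions for illustration, not the actual CASE implementation.

```python
# Minimal sketch of a "-settings -update SettingName:Value" handler.
# ASSUMPTION: Settings.conf holds one "SettingName: Value" pair per line;
# the real CASE file format may differ.

def parse_update_arg(arg: str) -> tuple[str, str]:
    """Split 'SettingName:True' into ('SettingName', 'True')."""
    name, _, value = arg.partition(":")
    if not name or not value:
        raise ValueError(f"expected SettingName:Value, got {arg!r}")
    return name, value

def update_setting(conf_text: str, name: str, value: str) -> str:
    """Return conf_text with `name` set to `value`, appending it if absent."""
    lines, found = [], False
    for line in conf_text.splitlines():
        key = line.split(":", 1)[0].strip()
        if key == name:
            lines.append(f"{name}: {value}")  # replace the existing pair
            found = True
        else:
            lines.append(line)                # keep other settings untouched
    if not found:
        lines.append(f"{name}: {value}")
    return "\n".join(lines)

conf = "DisplayAscii: True\nAI.Temperature: 0.7"
name, value = parse_update_arg("DisplayAscii:False")
print(update_setting(conf, name, value))
# DisplayAscii: False
# AI.Temperature: 0.7
```

Rewriting the whole file rather than editing in place keeps the handler simple and preserves the order of the remaining settings.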
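The storage-structure settings (InterfacingStructure, RPAStorageStructure, LoggingStructure, MonitoringStructure, SessionsStructure) all share a `\Folder\:[field1],[field2],...` shape. A small parser for that shape might look like the following; the parsing rules are inferred from the example values above, not taken from the CASE source.

```python
import re

# ASSUMPTION: structure values follow the "\Folder\:[f1],[f2],..." shape
# shown in the configuration examples; the real parser may differ.
def parse_structure(value: str) -> tuple[str, list[str]]:
    """Split a structure value into (folder name, list of field names)."""
    folder, _, fields_part = value.partition(":")
    fields = re.findall(r"\[([^\]]+)\]", fields_part)  # names inside [...]
    return folder.strip("\\"), fields

folder, fields = parse_structure(r"\Logs\:[sessionid],[logmessage]")
print(folder)   # Logs
print(fields)   # ['sessionid', 'logmessage']
```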
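To make the sampling settings (AI.Top_K, AI.Top_P, AI.Min_P) concrete, the sketch below applies the standard top-k, top-p (nucleus), and min-p filters to a toy token distribution. This is a generic illustration of those sampling techniques, not CASE's actual inference code; real samplers then draw randomly from the surviving tokens.

```python
def filter_tokens(probs: dict[str, float], top_k: int,
                  top_p: float, min_p: float) -> list[str]:
    """Return the tokens that survive top-k, top-p, and min-p filtering."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    ranked = ranked[:top_k]               # top-k: keep the k most probable tokens
    kept, cumulative = [], 0.0
    for token, p in ranked:
        if kept and cumulative >= top_p:  # top-p: stop once kept mass reaches top_p
            break
        if p < min_p * ranked[0][1]:      # min-p: drop tokens below a fraction
            break                         # of the most probable token's probability
        kept.append(token)
        cumulative += p
    return kept

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zx": 0.05}
print(filter_tokens(probs, top_k=40, top_p=0.4, min_p=0.0))  # ['the']
print(filter_tokens(probs, top_k=2, top_p=1.0, min_p=0.0))   # ['the', 'a']
```

With the documented defaults (top_p = 0.4), filtering stops as soon as the kept tokens cover 40% of the probability mass, which is why only the single most probable token survives in the first call.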