COM_GETBIBLE_ALL_IS_GOOD_PLEASE_CHECK_AGAIN_LATTER="All is good, please check again later."
COM_GETBIBLE_ALL_TRANSLATIONS="All Translations"
COM_GETBIBLE_ARCHIVED="Archived"
COM_GETBIBLE_ARE_YOU_SURE_YOU_WANT_TO_DELETE_CONFIRMING_WILL_PERMANENTLY_DELETE_THE_SELECTED_ITEMS="Are you sure you want to delete? Confirming will permanently delete the selected item(s)!"
COM_GETBIBLE_AUTHOR="Author"
COM_GETBIBLE_BACK="Back"
COM_GETBIBLE_BETA_RELEASE="Beta Release"
COM_GETBIBLE_BOOK="Book"
COM_GETBIBLE_BOOKS="Books"
COM_GETBIBLE_BOOKS_ACCESS="Books Access"
COM_GETBIBLE_BOOKS_ACCESS_DESC="Allows the users in this group to access books"
COM_GETBIBLE_BOOKS_BATCH_OPTIONS="Batch process the selected Books"
COM_GETBIBLE_BOOKS_BATCH_TIP="All changes will be applied to all selected Books"
COM_GETBIBLE_BOOKS_BATCH_USE="Books Batch Use"
COM_GETBIBLE_BOOKS_BATCH_USE_DESC="Allows users in this group to use the batch copy/update methods of books"
COM_GETBIBLE_BOOKS_CREATE="Books Create"
COM_GETBIBLE_BOOKS_CREATE_DESC="Allows the users in this group to create books"
COM_GETBIBLE_CONFIG_ACTIVE_SHARING_NOTE_DESCRIPTION="We cache the user selection, which means that these settings only form the default advanced options for new visitors, or for those who clear their cache."
COM_GETBIBLE_CONFIG_LOCAL_LINK_SHARE_LABEL="Local Link Share"
COM_GETBIBLE_CONFIG_MARKDOWN="Markdown"
COM_GETBIBLE_CONFIG_NAME="Name"
COM_GETBIBLE_CONFIG_NAME_DESCRIPTION="Name"
COM_GETBIBLE_CONFIG_NAME_HINT="Tab Name"
COM_GETBIBLE_CONFIG_NAME_LABEL="Tab"
COM_GETBIBLE_CONFIG_NAME_MESSAGE="Error! Please add a name here."
COM_GETBIBLE_CONFIG_NO="No"
COM_GETBIBLE_CONFIG_NONE="None"
COM_GETBIBLE_CONFIG_NOTES="Notes"
COM_GETBIBLE_CONFIG_ONLY_EXTRA="Only Extra"
COM_GETBIBLE_CONFIG_OPENAI_DOCUMENTATION_NOTE_DESCRIPTION="<p>Please review the OpenAI API documentation for creating a chat conversation at <a href='https://platform.openai.com/docs/api-reference/chat/create'>this link</a>. The document provides a comprehensive guide on parameters and methods to create chat completion using OpenAI's model. It includes instructions on:</p><ul> <li>How to post a request to create model responses</li> <li>The format for the request body including role, model, messages, and optional parameters such as name, content, and function_call</li> <li>Different ways to control the model's response such as temperature and top_p</li> <li>How to control the number of generated chat completion choices, the stop sequences, and maximum number of tokens</li> <li>Utilizing penalties and biases for managing the output</li> <li>Additional features like streaming and user tracking for abuse monitoring</li></ul>"
COM_GETBIBLE_CONFIG_OPENAI_FREQUENCY_PENALTY_NOTE_DESCRIPTION="<p>The 'frequency_penalty' is another optional parameter that defaults to 0. This is also a value between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.</p><ul> <li>For example, a high frequency penalty discourages the model from excessively repeating the same words or phrases, encouraging it to produce more diverse and creative text.</li></ul>"
COM_GETBIBLE_CONFIG_OPENAI_MAX_TOKENS_NOTE_DESCRIPTION="<p>The 'max_tokens' parameter sets the maximum number of tokens to generate in the chat completion. This is an optional parameter that defaults to infinity if not specified.</p><ul> <li>Note that the total length of both the input tokens and the generated tokens is limited by the model's context length. For instance, if a model has a context length of 100 tokens and you've used 20 tokens in your input, you could generate up to 80 additional tokens.</li></ul>"
COM_GETBIBLE_CONFIG_OPENAI_MODEL_DESCRIPTION="ID of the model to use."
COM_GETBIBLE_CONFIG_OPENAI_MODEL_LABEL="Model"
COM_GETBIBLE_CONFIG_OPENAI_N_DESCRIPTION="The number of chat completion choices to generate"
COM_GETBIBLE_CONFIG_OPENAI_N_LABEL="Number of AI Responses per Prompt"
COM_GETBIBLE_CONFIG_OPENAI_N_NOTE_DESCRIPTION="<p>The 'n' parameter determines how many independent completions to generate for each input message. This can be used when you want multiple distinct responses for a single prompt.</p><ul> <li>Setting 'n' to 3, for instance, would make the model generate 3 separate responses for each input message.</li></ul>"
COM_GETBIBLE_CONFIG_OPENAI_N_NOTE_LABEL="Number of AI Responses per Prompt"
COM_GETBIBLE_CONFIG_OPENAI_ORG_TOKEN_NOTE_DESCRIPTION="<h1>How to Get an OpenAI Organization API Token</h1><ol> <li> <strong>Sign In to OpenAI:</strong> Visit <a href='https://www.openai.com'>OpenAI</a> and click on the 'Sign In' button to access your account. </li> <li> <strong>Access the Organization Settings:</strong> Navigate to the organization settings page. This is typically accessible from your dashboard or account settings. Look for a section or tab labeled 'Organization' or 'Organization Settings.' </li> <li> <strong>Create a New Organization API Key:</strong> From the organization settings page, find the section for API keys. There should be an option to create a new API key. Click on this and follow the prompts to create your new organization API key. </li> <li> <strong>Copy the API Key:</strong> Once created, your new organization API key will be displayed. Make sure to copy and store this key securely. Just like personal API keys, you won't be able to view this key again for security reasons, so it's essential to save it somewhere safe. </li> <li> <strong>Use the Organization API Key:</strong> You can now use this organization API key to make requests to the OpenAI API on behalf of your organization. Include this key in the headers of your HTTP requests, just like you would with a personal API key. </li></ol><p>Remember, organization API keys represent access to your entire organization's OpenAI account, so be especially careful with these. Don't share them publicly, and make sure only trusted individuals within your organization have access. If you believe an organization API key has been compromised, delete it and create a new one immediately.</p>"
COM_GETBIBLE_CONFIG_OPENAI_ORG_TOKEN_NOTE_LABEL="Get OpenAI Organization Token"
COM_GETBIBLE_CONFIG_OPENAI_PRESENCE_PENALTY_DESCRIPTION="Penalty for new tokens based on whether they appear in the text"
COM_GETBIBLE_CONFIG_OPENAI_PRESENCE_PENALTY_NOTE_DESCRIPTION="<p>The 'presence_penalty' is an optional parameter that defaults to 0. This is a value between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.</p><ul> <li>For example, a high presence penalty encourages the model to generate text involving a wider variety of topics or themes, rather than focusing on a single topic or repeatedly using the same phrases.</li></ul>"
COM_GETBIBLE_CONFIG_OPENAI_TEMPERATURE_NOTE_DESCRIPTION="<p>The 'temperature' is a parameter that controls the randomness of the model's output. Its value ranges between 0 and 2. A higher temperature value results in more randomness, while a lower value results in less randomness. This affects the selection of the next word during text generation.</p><ul> <li>At a higher value like 2, the model has a greater probability of picking less likely words, which may lead to more diverse and creative outputs.</li> <li>At a lower value like 0.2, the model's output becomes more deterministic, primarily choosing words with the highest predicted probabilities, leading to more focused and predictable responses.</li></ul>"
COM_GETBIBLE_CONFIG_OPENAI_TOKEN_NOTE_DESCRIPTION="<h1>How to Get an OpenAI API Token</h1><ol> <li> <strong>Sign Up on OpenAI:</strong> Visit <a href='https://www.openai.com'>OpenAI</a> and click on the 'Sign Up' button to create a new account. You'll need to provide some basic information like your name and email address. </li> <li> <strong>Confirm your Email:</strong> After signing up, you'll receive an email from OpenAI to verify your email address. Click on the verification link to confirm your account. </li> <li> <strong>Access the Dashboard:</strong> Once your email is verified, log in to your OpenAI account and navigate to the Dashboard. You should see an option to create a new API key. </li> <li> <strong>Create a New API Key:</strong> Click on the 'Create New API Key' button. You'll be asked to name your API key and specify whether it has read and/or write access. Be sure to select the appropriate access level based on your needs. </li> <li> <strong>Copy your API Key:</strong> After creating the key, it will be displayed on your screen. Make sure to copy it and store it securely. You won't be able to view this key again for security reasons, so it's essential that you save it somewhere safe. </li> <li> <strong>Use your API Key:</strong> Now that you have your API key, you can use it to make requests to the OpenAI API. Typically, you will include this key in the headers of your HTTP requests. </li></ol><p>Keep in mind that OpenAI's API has usage limits depending on your type of account (free, pay-as-you-go, etc.), so make sure you understand these limits to avoid unexpected charges or service interruptions. You can find information about usage limits on the OpenAI Pricing page.</p><p>Additionally, remember to keep your API key secure. Do not share it publicly or commit it to public repositories. If you believe your API key has been compromised, you can delete it and generate a new one from your OpenAI dashboard.</p>"
COM_GETBIBLE_CONFIG_OPENAI_TOKEN_NOTE_LABEL="Get OpenAI Token"
COM_GETBIBLE_CONFIG_OPENAI_TOP_P_NOTE_DESCRIPTION="<p>The 'top_p' parameter is used for 'nucleus sampling', an alternative to temperature-based sampling. It defines a threshold for the cumulative probability of the chosen tokens. Rather than considering all possible tokens for the next word, it only considers the smallest set of tokens whose cumulative probability exceeds the set 'top_p' value.</p><ul> <li>Setting 'top_p' to 0.1 means the model will only consider the tokens comprising the top 10% probability mass for the next word.</li> <li>If 'top_p' is set to 0.9, the model considers a wider range of tokens for the next word but still limits to those within the top 90% of the probability distribution.</li></ul>"
COM_GETBIBLE_CONFIG_SHOW_INSTALL_BUTTON_DESCRIPTION="Show the install translation button on the Bible app page. This is normally only needed the first time you set up the application on your website."
COM_GETBIBLE_CONFIG_UIKIT_DESC="<b>The parameters for UIkit are set here.</b><br />UIkit is a lightweight and modular front-end framework for developing fast and powerful web interfaces. For more info visit <a href='https://getuikit.com/' target='_blank'>https://getuikit.com/</a>"
COM_GETBIBLE_CONFIG_UIKIT_LABEL="UIkit 3 Settings"
COM_GETBIBLE_CONFIG_UIKIT_LOAD_DESC="Set the UIkit loading option."
COM_GETBIBLE_GET_TOKEN_FROM_VDM_TO_GET_UPDATE_NOTICE_AND_ADD_IT_TO_YOUR_GLOBAL_OPTIONS="Get a token from VDM to receive update notices, and add it to your global options."
COM_GETBIBLE_HELP_MANAGER="Help"
COM_GETBIBLE_HTWOCURL_NOT_FOUNDHTWOPPLEASE_SETUP_CURL_ON_YOUR_SYSTEM_OR_BGETBIBLEB_WILL_NOT_FUNCTION_CORRECTLYP="<h2>cURL Not Found!</h2><p>Please set up cURL on your system, or <b>getbible</b> will not function correctly!</p>"
COM_GETBIBLE_OPEN_AI_MESSAGE_SAVE_WARNING="Alias already existed so a number was added at the end. You can re-edit the Open AI Message to customise the alias."
COM_GETBIBLE_OPEN_AI_MESSAGE_SOURCE_DESCRIPTION="Source of message"
COM_GETBIBLE_OPEN_AI_RESPONSES="Open AI Responses"
COM_GETBIBLE_OPEN_AI_RESPONSES_ACCESS="Open AI Responses Access"
COM_GETBIBLE_OPEN_AI_RESPONSES_ACCESS_DESC="Allows the users in this group to access Open AI responses"
COM_GETBIBLE_OPEN_AI_RESPONSES_BATCH_OPTIONS="Batch process the selected Open AI Responses"
COM_GETBIBLE_OPEN_AI_RESPONSES_BATCH_TIP="All changes will be applied to all selected Open AI Responses"
COM_GETBIBLE_OPEN_AI_RESPONSES_BATCH_USE="Open AI Responses Batch Use"
COM_GETBIBLE_OPEN_AI_RESPONSES_BATCH_USE_DESC="Allows users in this group to use the batch copy/update methods of Open AI responses"
COM_GETBIBLE_OPEN_AI_RESPONSES_CREATE="Open AI Responses Create"
COM_GETBIBLE_OPEN_AI_RESPONSES_CREATE_DESC="Allows the users in this group to create Open AI responses"
COM_GETBIBLE_OPEN_AI_RESPONSES_DASHBOARD_LIST="Open AI Responses Dashboard List"
COM_GETBIBLE_OPEN_AI_RESPONSES_DASHBOARD_LIST_DESC="Allows the users in this group to see the dashboard list of Open AI responses"
COM_GETBIBLE_OPEN_AI_RESPONSES_DELETE="Open AI Responses Delete"
COM_GETBIBLE_OPEN_AI_RESPONSES_DELETE_DESC="Allows the users in this group to delete Open AI responses"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT="Open AI Responses Edit"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_ABBREVIATION="Open AI Responses Edit Abbreviation"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_ABBREVIATION_DESC="Allows the users in this group to edit the abbreviation of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_BOOK="Open AI Responses Edit Book"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_BOOK_DESC="Allows the users in this group to edit the book of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_CHAPTER="Open AI Responses Edit Chapter"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_CHAPTER_DESC="Allows the users in this group to edit the chapter of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_COMPLETION_TOKENS="Open AI Responses Edit Completion Tokens"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_COMPLETION_TOKENS_DESC="Allows the users in this group to edit the completion tokens of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_CREATED_BY="Open AI Responses Edit Created By"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_CREATED_BY_DESC="Allows the users in this group to update the created by of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_CREATED_DATE="Open AI Responses Edit Created Date"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_CREATED_DATE_DESC="Allows the users in this group to update the created date of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_DESC="Allows the users in this group to edit the Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_FREQUENCY_PENALTY="Open AI Responses Edit Frequency Penalty"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_FREQUENCY_PENALTY_DESC="Allows the users in this group to edit the frequency penalty of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_LANGUAGE="Open AI Responses Edit Language"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_LANGUAGE_DESC="Allows the users in this group to edit the language of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_LCSH="Open AI Responses Edit LCSH"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_LCSH_DESC="Allows the users in this group to edit the LCSH of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_MAX_TOKENS="Open AI Responses Edit Max Tokens"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_MAX_TOKENS_DESC="Allows the users in this group to edit the max tokens of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_MODEL="Open AI Responses Edit Model"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_MODEL_DESC="Allows the users in this group to edit the model of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_N="Open AI Responses Edit N"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_N_DESC="Allows the users in this group to edit the n value of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_OWN="Open AI Responses Edit Own"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_OWN_DESC="Allows the users in this group to edit their own Open AI responses"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_PRESENCE_PENALTY="Open AI Responses Edit Presence Penalty"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_PRESENCE_PENALTY_DESC="Allows the users in this group to edit the presence penalty of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_PROMPT="Open AI Responses Edit Prompt"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_PROMPT_DESC="Allows the users in this group to edit the prompt of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_PROMPT_TOKENS="Open AI Responses Edit Prompt Tokens"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_PROMPT_TOKENS_DESC="Allows the users in this group to edit the prompt tokens of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_RESPONSE_CREATED="Open AI Responses Edit Response Created"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_RESPONSE_CREATED_DESC="Allows the users in this group to edit the response created date of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_RESPONSE_ID="Open AI Responses Edit Response ID"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_RESPONSE_ID_DESC="Allows the users in this group to edit the response ID of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_RESPONSE_MODEL="Open AI Responses Edit Response Model"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_RESPONSE_MODEL_DESC="Allows the users in this group to edit the response model of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_RESPONSE_OBJECT="Open AI Responses Edit Response Object"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_RESPONSE_OBJECT_DESC="Allows the users in this group to edit the response object of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_SELECTED_WORD="Open AI Responses Edit Selected Word"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_SELECTED_WORD_DESC="Allows the users in this group to edit the selected word of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_STATE="Open AI Responses Edit State"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_STATE_DESC="Allows the users in this group to update the state of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_TEMPERATURE="Open AI Responses Edit Temperature"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_TEMPERATURE_DESC="Allows the users in this group to edit the temperature of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_TOP_P="Open AI Responses Edit Top P"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_TOP_P_DESC="Allows the users in this group to edit the top p of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_TOTAL_TOKENS="Open AI Responses Edit Total Tokens"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_TOTAL_TOKENS_DESC="Allows the users in this group to edit the total tokens of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_VERSE="Open AI Responses Edit Verse"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_VERSE_DESC="Allows the users in this group to edit the verse of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_VERSION="Open AI Responses Edit Version"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_VERSION_DESC="Allows users in this group to edit the version of Open AI responses"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_WORD="Open AI Responses Edit Word"
COM_GETBIBLE_OPEN_AI_RESPONSES_EDIT_WORD_DESC="Allows the users in this group to edit the word of an Open AI response"
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_ARCHIVED="%s Open AI Responses archived."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_ARCHIVED_1="%s Open AI Response archived."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_CHECKED_IN_0="No Open AI Response successfully checked in."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_CHECKED_IN_1="%d Open AI Response successfully checked in."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_CHECKED_IN_MORE="%d Open AI Responses successfully checked in."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_DELETED="%s Open AI Responses deleted."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_DELETED_1="%s Open AI Response deleted."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_FAILED_PUBLISHING="%s Open AI Responses failed publishing."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_FAILED_PUBLISHING_1="%s Open AI Response failed publishing."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_FEATURED="%s Open AI Responses featured."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_FEATURED_1="%s Open AI Response featured."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_PUBLISHED="%s Open AI Responses published."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_PUBLISHED_1="%s Open AI Response published."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_TRASHED="%s Open AI Responses trashed."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_TRASHED_1="%s Open AI Response trashed."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_UNFEATURED="%s Open AI Responses unfeatured."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_UNFEATURED_1="%s Open AI Response unfeatured."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_UNPUBLISHED="%s Open AI Responses unpublished."
COM_GETBIBLE_OPEN_AI_RESPONSES_N_ITEMS_UNPUBLISHED_1="%s Open AI Response unpublished."
COM_GETBIBLE_OPEN_AI_RESPONSES_SUBMENU="Open AI Responses Submenu"
COM_GETBIBLE_OPEN_AI_RESPONSES_SUBMENU_DESC="Allows the users in this group to access the submenu of Open AI responses"
COM_GETBIBLE_OPEN_AI_RESPONSE_ABBREVIATION_DESCRIPTION="Select a Bible translation"
COM_GETBIBLE_OPEN_AI_RESPONSE_RESPONSE_OBJECT_MESSAGE="Error! Please add some text here."
COM_GETBIBLE_OPEN_AI_RESPONSE_SAVE_WARNING="Alias already existed so a number was added at the end. You can re-edit the Open AI Response to customise the alias."
COM_GETBIBLE_PROMPT_CACHE_ADVANCE_NOTE_DESCRIPTION="<p>Recommended: Cache responses related to specific verses from a particular book, chapter, and translation. Provides more context-specific accuracy, but potentially more costly.</p><p>The 'Advanced Caching - Verse/Context' strategy is our recommended choice for queries that focus on the interpretation of specific verses from a particular book, chapter, and translation of the Bible. Here's a breakdown of its operation and considerations:</p><ul><li><strong>Function:</strong> When a query is made, the system checks the cache for prior responses tied to the same verse within the same book, chapter, and translation. If a match is discovered, the cached response is delivered, improving speed while also managing costs.</li><li><strong>Pros:</strong> This strategy offers improved context-specific accuracy by considering the specific verse, chapter, and book where the word or phrase is located. This results in more precise interpretations and a superior user experience.</li><li><strong>Cons:</strong> As a trade-off for its higher accuracy, this method may be more costly than the 'Basic Caching - Words/Language' strategy due to the necessity of caching a wider range of unique queries.</li><li><strong>Recommendation:</strong> Despite the potential increased cost, we strongly advocate for this method due to its emphasis on context and accuracy.</li></ul>"
COM_GETBIBLE_PROMPT_CACHE_ADVANCE_NOTE_LABEL="Discover the Advanced Caching Strategy"
COM_GETBIBLE_PROMPT_CACHE_BASIC_NOTE_DESCRIPTION="<p>Cache responses based on specific words or phrases within a particular language.</p><p>The 'Basic Caching - Words/Language' strategy is designed to handle queries that focus on the meaning or interpretation of specific words or phrases within a given language. Here's how it works and what to consider:</p><ul><li><strong>Function:</strong> When a query is initiated, the system searches the cache to determine if the same word or phrase has been previously requested in the same language. If a match is found, the cached response is given, enhancing speed and managing costs.</li><li><strong>Pros:</strong> This method can be more cost-effective and faster as it avoids repeated API calls for identical queries.</li><li><strong>Cons:</strong> The major drawback of this approach is its potential for less accurate interpretations. It doesn't take into account the specific context provided by the verse, chapter, and book of the Bible where the word or phrase is found.</li><li><strong>Recommendation:</strong> While this method can be efficient, for more contextually accurate responses, consider the 'Advanced Caching - Verse/Context' strategy.</li></ul>"
COM_GETBIBLE_PROMPT_CACHE_BASIC_NOTE_LABEL="Understand the Basic Caching Strategy"
COM_GETBIBLE_PROMPT_CACHE_BEHAVIOUR_DESCRIPTION="Determine the caching behaviour of this prompt. Be aware, this is a crucial setting that significantly impacts the prompt's operation."
COM_GETBIBLE_PROMPT_CACHE_CAPACITY_DESCRIPTION="Determine the maximum number of unique responses to be stored in the cache for each word (in the 'Basic Caching - Words/Language' strategy) or each verse (in the 'Advanced Caching - Verse/Context' strategy), before additional calls to OpenAI's API are halted. This setting helps manage your usage of OpenAI's services by setting a limit on the variety of cached responses."
COM_GETBIBLE_PROMPT_CACHE_CAPACITY_NOTE_DESCRIPTION="<p>The 'Cache Capacity' feature is an essential tool designed to help you manage the usage of OpenAI's API services more efficiently and cost-effectively. It functions by setting an upper limit on the number of unique responses that the system stores in the cache before it stops making additional calls to OpenAI's API.</p><p>In the 'Basic Caching - Words/Language' strategy, a 'unique response' refers to the cached answer to a query about a specific word or phrase in a particular language. Once the number of cached responses for each unique word or phrase reaches the defined 'Cache Capacity', the system will not initiate new API calls for that word or phrase. Instead, it will deliver already cached responses, either at random or in total, depending on your 'Response Retrieval' settings.</p><p>Similarly, in the 'Advanced Caching - Verse/Context' strategy, a 'unique response' pertains to the cached answer to a query about a specific verse from a particular book, chapter, and translation of the Bible. When the cache capacity for each unique verse is reached, the system will refrain from making new API calls for that verse, and deliver the responses from the cache, again following your 'Response Retrieval' preferences.</p><p>By using the 'Cache Capacity' feature, you gain control over the diversity of responses that are cached and reduce the potential costs of repeated API calls. Please note that the 'Cache Capacity' applies separately to each word or phrase in the basic strategy, and to each verse in the advanced strategy, allowing for fine-tuned control over the caching process.</p><p>However, keep in mind that setting the 'Cache Capacity' too low might limit the variety of responses, while setting it too high may lead to increased caching costs. Therefore, it's crucial to find a balance that suits your specific needs and the nature of your application.</p>"
COM_GETBIBLE_PROMPT_CACHE_PERSISTENTLY_EXPANSIVE_CACHING_NOTE_DESCRIPTION="<h2>Persistently Expansive Caching Strategy</h2><p>Intended for use in testing or by experienced users, this strategy always caches responses but does not use the cache to respond to subsequent queries.</p><p>Our 'Persistently Expansive Caching' strategy represents a significant shift in our caching paradigms. Contrary to 'none', where caching is non-existent, and unlike 'basic' or 'advanced', where previous responses are used to inform future ones, this strategy stores every interaction but doesn't utilize this cache to respond to subsequent similar queries. Here are some crucial aspects to bear in mind:</p><ul> <li><strong>Function:</strong> Every query initiated will invoke an API call to OpenAI, and its response will be cached. However, unlike our other strategies, the system will not use this cache to respond to future similar queries. This strategy ensures a fresh interaction with OpenAI for each query, providing unique responses each time.</li> <li><strong>Pros:</strong> You will have a comprehensive record of all interactions with OpenAI, helpful for deep-dive analyses or rigorous testing scenarios. This method ensures the most recent and contextually accurate responses from OpenAI at all times.</li> <li><strong>Cons:</strong> This method is the most resource-intensive and expensive of our caching strategies. Each query incurs a new cost, regardless of whether the same question has been asked before. It can rapidly escalate expenses if not managed prudently.</li> <li><strong>Warning:</strong> We recommend using this strategy sparingly and mostly for testing purposes due to its extensive resource and cost implications. It should be employed by experienced users who fully understand its expansive nature and have strategies to manage the associated costs effectively.</li></ul><p>While 'Persistently Expansive Caching' offers the advantage of fresh interactions and comprehensive caching, remember the virtues of the 'Basic' and 'Advanced' caching strategies. These strategies balance cost, speed, and accuracy by smartly utilizing the cache to respond to repeat queries, thereby optimizing your OpenAI interactions. Your choice should align with your unique requirements, your cost management strategies, and your desired balance between speed, cost, and accuracy.</p>"
COM_GETBIBLE_PROMPT_FREQUENCY_PENALTY_NOTE_DESCRIPTION="<p>The 'frequency_penalty' is another optional parameter that defaults to 0. This is also a value between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.</p><ul> <li>For example, a high frequency penalty discourages the model from excessively repeating the same words or phrases, encouraging it to produce more diverse and creative text.</li></ul>"
COM_GETBIBLE_PROMPT_INTEGRATION_NOTE_DESCRIPTION="<p>The 'Prompt Integration Scope' feature is instrumental in defining the scope of the Bible text that your prompt can integrate with when querying the OpenAI API. Depending on your selection, this directly impacts how the application interacts with OpenAI's services and how responses are cached and presented to the user.</p><ul><li><strong>Word-Based:</strong> If you select this option, your prompt will be available whenever a single word is selected within the application. This strategy is more granular and allows for highly targeted queries based on individual words. However, this option can lead to a higher number of API calls if the selected word varies frequently. Note that the caching behavior in this case will be based on the selected word(s).</li><li><strong>Verse-Based:</strong> Opting for this will make your prompt available for entire verses. It does not target individual words within a verse but treats the verse as a whole. This approach results in fewer API calls as long as the verse remains the same. Note that caching in this mode will be linked to the specific verse, not to individual words in the verse. This option only utilizes the placeholder <b>[selected_verse]</b>.</li><li><strong>Selection-Based:</strong> With this selection, your prompt becomes available when one or more words across one or more verses are selected. This gives maximum flexibility but can also lead to increased API calls, especially if the selections change frequently. The caching strategy in this case will be tied to the specific word(s) and verse(s) selected.</li></ul><p>Choosing the right 'Prompt Integration Scope' is vital as it directly affects the positioning and availability of the prompt in the GUI, the interaction with the OpenAI API, and the caching strategy and behavior. Be sure to select the option that best suits the nature of your queries and the needs of your users.</p>"
COM_GETBIBLE_PROMPT_MAX_TOKENS_DESCRIPTION="Maximum number of tokens to generate."
COM_GETBIBLE_PROMPT_MAX_TOKENS_LABEL="Max Tokens"
COM_GETBIBLE_PROMPT_MAX_TOKENS_NOTE_DESCRIPTION="<p>The 'max_tokens' parameter sets the maximum number of tokens to generate in the chat completion. This is an optional parameter that defaults to infinity if not specified.</p><ul> <li>Note that the total length of both the input tokens and the generated tokens is limited by the model's context length. For instance, if a model has a context length of 100 tokens and you've used 20 tokens in your input, you could generate up to 80 additional tokens.</li></ul>"
COM_GETBIBLE_PROMPT_NAME_MESSAGE="Error! Please add a name here."
COM_GETBIBLE_PROMPT_NEW="A New Prompt"
COM_GETBIBLE_PROMPT_N_DESCRIPTION="The number of chat completion choices to generate"
COM_GETBIBLE_PROMPT_N_LABEL="Number AI Response Per/Prompt"
COM_GETBIBLE_PROMPT_N_NOTE_DESCRIPTION="<p>The 'n' parameter determines how many independent completions to generate for each input message. This can be used when you want multiple distinct responses for a single prompt.</p><ul> <li>Setting 'n' to 3, for instance, would make the model generate 3 separate responses for each input message.</li></ul>"
COM_GETBIBLE_PROMPT_N_NOTE_LABEL="Number of AI Responses per Prompt"
COM_GETBIBLE_PROMPT_N_OVERRIDE_DESCRIPTION="Would you like to override the global 'n' value?"
COM_GETBIBLE_PROMPT_N_OVERRIDE_LABEL="Number of AI Responses per Prompt"
COM_GETBIBLE_PROMPT_OPENAI_DOCUMENTATION_NOTE_DESCRIPTION="<p>Please review the OpenAI API documentation for creating a chat conversation at <a href='https://platform.openai.com/docs/api-reference/chat/create'>this link</a>. The document provides a comprehensive guide on parameters and methods to create chat completion using OpenAI's model. It includes instructions on:</p><ul> <li>How to post a request to create model responses</li> <li>The format for the request body including role, model, messages, and optional parameters such as name, content, and function_call</li> <li>Different ways to control the model's response such as temperature and top_p</li> <li>How to control the number of generated chat completion choices, the stop sequences, and maximum number of tokens</li> <li>Utilizing penalties and biases for managing the output</li> <li>Additional features like streaming and user tracking for abuse monitoring</li></ul>"
COM_GETBIBLE_PROMPT_OPENAI_PROMPTS_PLACEHOLDERS_ADVANCED_CACHING_NOTE_DESCRIPTION="<p>You can use the following placeholders in the prompts:</p><div><code style='display: inline-block; padding: 2px; margin: 3px;'>[translation_name]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[translation_language]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[translation_lcsh]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[translation_abbreviation]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[book_name]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[chapter_number]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[chapter_name]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[chapter_text]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[verse_number]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[verse_name]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[verse_text]</code><code class='selected-word-placeholder' style='display: inline-block; padding: 2px; margin: 3px;'>[selected_word_number]</code><code class='selected-word-placeholder' style='display: inline-block; padding: 2px; margin: 3px;'>[selected_word_text]</code></div><small>Utilizing these placeholders is crucial to enhancing the distinctiveness of responses from OpenAI. Be aware that using the <b>[chapter_text]</b> placeholder loads the complete text of the chapter. This significantly increases the size of the query and might cause failures in certain circumstances. Exercise caution and restraint when implementing the <b>[chapter_text]</b> placeholder!</small>"
COM_GETBIBLE_PROMPT_OPENAI_PROMPTS_PLACEHOLDERS_BASIC_CACHING_NOTE_DESCRIPTION="<p>You can use the following placeholders in the prompts:</p><div><code style='display: inline-block; padding: 2px; margin: 3px;'>[translation_language]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[translation_lcsh]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[selected_word_text]</code></div><small>Utilizing these placeholders is crucial to enhancing the distinctiveness of responses from OpenAI. Please note that <b>only these</b> placeholders are available in basic caching mode.</small>"
COM_GETBIBLE_PROMPT_OPENAI_PROMPTS_PLACEHOLDERS_NONE_CACHING_NOTE_DESCRIPTION="<p>You can use the following placeholders in the prompts:</p><div><code style='display: inline-block; padding: 2px; margin: 3px;'>[translation_name]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[translation_language]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[translation_lcsh]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[translation_abbreviation]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[book_name]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[chapter_number]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[chapter_name]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[chapter_text]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[verse_number]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[verse_name]</code><code style='display: inline-block; padding: 2px; margin: 3px;'>[verse_text]</code><code class='selected-word-placeholder' style='display: inline-block; padding: 2px; margin: 3px;'>[selected_word_number]</code><code class='selected-word-placeholder' style='display: inline-block; padding: 2px; margin: 3px;'>[selected_word_text]</code></div><small>Utilizing these placeholders is crucial to enhancing the distinctiveness of responses from OpenAI. Be aware that using the <b>[chapter_text]</b> placeholder loads the complete text of the chapter. This significantly increases the size of the query and might cause failures in certain circumstances. Exercise caution and restraint when implementing the <b>[chapter_text]</b> placeholder!</small>"
COM_GETBIBLE_PROMPT_OPENAI_PROMPTS_PLACEHOLDERS_NONE_CACHING_NOTE_LABEL="Prompts Placeholders (No Caching)"
COM_GETBIBLE_PROMPT_PRESENCE_PENALTY_NOTE_DESCRIPTION="<p>The 'presence_penalty' is an optional parameter that defaults to 0. It accepts a value between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood of talking about new topics.</p><ul> <li>For example, a high presence penalty encourages the model to generate text involving a wider variety of topics or themes, rather than focusing on a single topic or repeatedly using the same phrases.</li></ul>"
COM_GETBIBLE_PROMPT_RESPONSE_RETRIEVAL_DESCRIPTION="Choose how cached responses are delivered: either a 'Total Retrieval' of all relevant cached responses or a 'Random Retrieval' of a single relevant response from the cache."
COM_GETBIBLE_PROMPT_RESPONSE_RETRIEVAL_NOTE_DESCRIPTION="<p>The 'Response Retrieval' feature provides you with control over how cached responses are served when a cache hit occurs. You have two options to choose from:</p><ul><li><strong>Total Retrieval:</strong> This option delivers all cached responses that are relevant to a query. For instance, if multiple unique responses are stored in the cache for a particular word or verse, 'Total Retrieval' will present all of these responses.</li><li><strong>Random Retrieval:</strong> With this selection, the system will pick and present a single relevant response from the cache at random for a given query.</li></ul><p>These settings can be applied to both the 'Basic Caching - Words/Language' and the 'Advanced Caching - Verse/Context' strategies, adding an extra layer of flexibility to how you manage and utilize your cached data.</p><p>Remember, the choice between 'Total Retrieval' and 'Random Retrieval' can impact the user experience. 'Total Retrieval' may provide a more comprehensive overview of possible responses, while 'Random Retrieval' might offer a more streamlined and varied experience.</p>"
COM_GETBIBLE_PROMPT_TEMPERATURE_NOTE_DESCRIPTION="<p>The 'temperature' is a parameter that controls the randomness of the model's output. Its value ranges between 0 and 2. A higher temperature value results in more randomness, while a lower value results in less randomness. This affects the selection of the next word during text generation.</p><ul> <li>At a higher value like 2, the model has a greater probability of picking less likely words, which may lead to more diverse and creative outputs.</li> <li>At a lower value like 0.2, the model's output becomes more deterministic, primarily choosing words with the highest predicted probabilities, leading to more focused and predictable responses.</li></ul>"
COM_GETBIBLE_PROMPT_TOP_P_NOTE_DESCRIPTION="<p>The 'top_p' parameter is used for 'nucleus sampling', an alternative to temperature-based sampling. It defines a threshold for the cumulative probability of the chosen tokens. Rather than considering all possible tokens for the next word, the model only considers the smallest set of tokens whose cumulative probability exceeds the set 'top_p' value.</p><ul> <li>Setting 'top_p' to 0.1 means the model will only consider the tokens comprising the top 10% probability mass for the next word.</li> <li>If 'top_p' is set to 0.9, the model considers a wider range of tokens for the next word but still limits to those within the top 90% of the probability distribution.</li></ul>"
COM_GETBIBLE_PROMPT_TOP_P_NOTE_LABEL="Top P"
COM_GETBIBLE_PROMPT_TOP_P_OVERRIDE_DESCRIPTION="Would you like to override the global top_p value?"
COM_GETBIBLE_PROMPT_TOP_P_OVERRIDE_LABEL="Top P"
COM_GETBIBLE_PROMPT_TOTAL="Total"
COM_GETBIBLE_PROMPT_USER="user"
COM_GETBIBLE_PROMPT_USE_GLOBAL="Use Global"
COM_GETBIBLE_PROMPT_VERSEBASED="Verse-Based"
COM_GETBIBLE_PROMPT_VERSION_DESC="A count of the number of times this Prompt has been revised."
COM_GETBIBLE_TAGGED_VERSE_SAVE_WARNING="The alias already existed, so a number was added at the end. You can re-edit the Tagged Verse to customise the alias."
COM_GETBIBLE_TAGGED_VERSE_STATUS="Status"
COM_GETBIBLE_TAGGED_VERSE_TAG_LABEL="Tag"
COM_GETBIBLE_TAGGED_VERSE_VERSE_DESCRIPTION="Select the verse number"
COM_GETBIBLE_TAGGED_VERSE_VERSE_LABEL="Verse"
COM_GETBIBLE_TAGGED_VERSE_VERSION_DESC="A count of the number of times this Tagged Verse has been revised."
COM_GETBIBLE_TAGGED_VERSE_VERSION_LABEL="Version"
COM_GETBIBLE_TAGS="Tags"
COM_GETBIBLE_TAGS_ACCESS="Tags Access"
COM_GETBIBLE_TAGS_ACCESS_DESC="Allows the users in this group to access tags"
COM_GETBIBLE_TAGS_BATCH_OPTIONS="Batch process the selected Tags"
COM_GETBIBLE_TAGS_BATCH_TIP="All changes will be applied to all selected Tags"
COM_GETBIBLE_TAGS_BATCH_USE="Tags Batch Use"
COM_GETBIBLE_TAGS_BATCH_USE_DESC="Allows users in this group to use the batch copy/update methods on tags"
COM_GETBIBLE_TAGS_CREATE="Tags Create"
COM_GETBIBLE_TAGS_CREATE_DESC="Allows the users in this group to create tags"
COM_GETBIBLE_THE_NOTE_WAS_SUCCESSFULLY_CREATED="The note was successfully created."
COM_GETBIBLE_THE_NOTE_WAS_SUCCESSFULLY_UPDATED="The note was successfully updated."
COM_GETBIBLE_THE_NOTICE_BOARD_IS_LOADING="The notice board is loading"
COM_GETBIBLE_THE_README_IS_LOADING="The readme is loading"
COM_GETBIBLE_THE_TAG_SELECTED_IS_NOT_ACTIVE_PLEASE_SELECT_AN_ACTIVE_TAG="The tag selected is not active, please select an active tag."
COM_GETBIBLE_THE_TAG_WAS_SUCCESSFULLY_REMOVED_FROM_THE_VERSE="The tag was successfully removed from the verse."
COM_GETBIBLE_THE_TAG_WAS_SUCCESSFULLY_SET="The tag was successfully set."
COM_GETBIBLE_THE_VERSE_WAS_SUCCESSFULLY_TAGGED="The verse was successfully tagged."
COM_GETBIBLE_THE_WIKI_CAN_ONLY_BE_LOADED_WHEN_YOUR_JCB_SYSTEM_HAS_INTERNET_CONNECTION="The wiki can only be loaded when your JCB system has an internet connection."
COM_GETBIBLE_THE_WIKI_IS_LOADING="The wiki is loading"
COM_GETBIBLE_THIS_IS_A_GLOBAL_TAG_SET_BY_US_AT_BSB_FOR_YOUR_CONVENIENCE_WE_HOLD_THE_PRIVILEGE_TO_MODIFY_THESE_TAGS_IF_YOU_BELIEVE_ITS_SET_IN_ERROR_KINDLY_INFORM_US="This is a global tag, set by us at <b>%s</b> for your convenience. We hold the privilege to modify these tags. If you believe it's set in error, kindly inform us."
COM_GETBIBLE_THIS_TAG_COULD_NOT_BE_REMOVED="This tag could not be removed."
COM_GETBIBLE_TRANSLATION_SAVE_WARNING="The alias already existed, so a number was added at the end. You can re-edit the Translation to customise the alias."
COM_GETBIBLE_TRANSLATION_SHA_DESCRIPTION="Enter the SHA checksum"
COM_GETBIBLE_TRANSLATION_SHA_LABEL="Checksum"
COM_GETBIBLE_TRANSLATION_SHA_MESSAGE="Error! Please add the SHA checksum here."
COM_GETBIBLE_YOU_ARE_CURRENTLY_VIEWING_THE_TRASHED_ITEMS="You are currently viewing the trashed items."
COM_GETBIBLE_YOU_ARE_CURRENTLY_VIEWING_THE_TRASH_AREA_AND_YOU_DONT_HAVE_ANY_ITEMS_IN_TRASH_AT_THE_MOMENT="You are currently viewing the trash area, and you don't have any items in trash at the moment!"
COM_GETBIBLE_YOU_CAN_DIRECTLY_DOWNLOAD_THE_LATEST_UPDATE_OR_USE_THE_JOOMLA_UPDATE_AREA="You can directly download the latest update, or use the Joomla update area."
COM_GETBIBLE_YOU_WILL_HAVE_TO_ENABLE_OPEN_AI_IN_THE_GLOBAL_OPTIONS_OF_YOUR_COMPONENT_SINCE_IT_IS_CURRENTLY_DISABLED="You will have to enable OpenAI in the global options of your component, since it is currently disabled."