SaaS / Hosted Monthly Release Notes - January 2020 (10.1.009 - 10.2.000)

Imports: Run indexing and enrichment using an import job

The Imports feature now allows you to request an indexing and enrichment job after an import job completes. On the Case Home > Manage Documents > Imports page, the Import Details page contains a Run indexing and enrichment option, as shown in the following figure.

Import Details page

Selecting this option runs an indexing and enrichment job immediately after the import job completes. After adding a new import job, you can verify the selection of this option by clicking the Import ID for that job and looking under the Import Details section of the Properties page, as shown in the following figure. The Run Indexing and Enrichment property indicates Yes if the option is selected, or No if it is not.

Images and Natives Properties page

Ingestions: Add new system fields for ingestions

We have added the following three system fields to the Ingestions feature:

  • [Meta] Message Class: The message class MAPI property for email files. By default, this field is checked on the Customize Fields page in the Advanced Settings window for ingestions.
  • [Meta] PDF Properties: Extracted properties specific to PDF files. Most files will have multiple properties. Each value in this field has the name of the property followed by the value for that property. By default, this field is checked on the Customize Fields page in the Advanced Settings window for ingestions.
  • [Meta] Transport Message Headers: The message header for email files. By default, this field is unchecked on the Customize Fields page in the Advanced Settings window for ingestions.

Ingestions: NIST list updated - September 2019

Ingestions now uses an updated version of the NIST National Software Reference Library (NSRL) list, released in September 2019. For more information, go to https://www.nist.gov/itl/ssd/software-quality-group/national-software-reference-library-nsrl.

Ingestions: Improvements to functionality and performance

Ingestions now uses the Nuix Workstation 8.2 processing engine. As a result, improvements to Ingestions include the following.

  • Handling of OneNote files is improved.
    • More content and attachments are extracted from OneNote data.
  • Support has been added for HEIC/HEIF file formats.
  • CAD drawing attachments are no longer treated as immaterial.
  • General improvements have been made to processing EnCase L01 files.

For a full list of features, see the Nuix Workstation 8.2 documentation.

Ingestions: Add error message information for corrupt documents

When the application encounters an ingestions error because of a corrupt document, information about that error appears in the [RT] Ingestion Detail field.

Load File Templates: Add new fields to the Variable builder for Load file templates

We have added two new expressions as options for load file template field values: Attach Count and Attach Filenames. These options are available for both general and production load file templates.

  • The Attach Count expression returns the number of immediate attachments associated with a parent document. If there are no immediate attachments, no value will be returned in the field.
  • The Attach Filenames expression lists the file names for immediate attachments associated with a parent document. The file name values are from the [Meta] File Name field. If there are no immediate attachments, no value will be returned in the field.

Processing > Jobs: Gather case metrics job captures total file size of base documents for non-document entity items

When you run a Gather case metrics job, the application now captures the total file size of the image, native, and content files associated with non-document entity items, in addition to the file size of those files associated with base documents. This information appears in the Base documents (GB) column on the Portal Management > Reports > Hosted Details page.

Connect API Explorer: GraphQL and GraphQL Parser version upgrade

Connect API Explorer now contains the latest upgraded versions of GraphQL (v2.4.0) and GraphQL Parser (v4.1.2). These upgrades require a few minor changes to existing API queries and code that declare Date variables.

In any existing API queries, change the variable type from Date to DateTime. The following figure shows an example of an existing query declaring a Date variable before the upgrade.

Connect API Explorer API page showing Date variable

This next figure shows the needed change for the upgraded version of GraphQL.

Connect API Explorer API page showing DateTime variable
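
The following is a minimal sketch of the change. The query body and the createdAfter argument are illustrative only, not actual Connect API schema references; only the variable type declaration is the point.

Before the upgrade:

query recentCases($cutoff: Date!) {
  # createdAfter is a hypothetical argument, shown only so the variable is used
  cases(createdAfter: $cutoff) {
    name
  }
}

After the upgrade, only the declared variable type changes:

query recentCases($cutoff: DateTime!) {
  # createdAfter is a hypothetical argument, shown only so the variable is used
  cases(createdAfter: $cutoff) {
    name
  }
}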

Connect API Explorer: API token enhancements

Newly created API authorization tokens no longer require separate API keys and will never expire. On the User Administration > API Access page, the API key label now shows the following message: The API key is not required for new authorization tokens.

The API authorization changes are backward compatible to accept existing authorization tokens, which will expire after three years.

To get a new key for an existing user, on the User Administration > API Access page, clear the Authorize this user to use the Connect API check box. Then select this option again to reactivate their authorization.

Connect API Explorer: New userAdd mutation

The new userAdd mutation allows the addition of new user accounts using the API. The following lists the accepted input data for this mutation.

  • firstName: Required.
  • lastName: Required.
  • userName: Required.
  • password: Required.
  • email: Optional.
  • licenses: Default is Yes.
  • forceReset: Default is Yes.
  • portalCategory: Required. Follows the same rules as in the user interface (UI) for what the user passing in the mutation can assign.
  • organizationID: Follows the same rules as in the UI for what the user passing in the mutation can assign.
  • companyID: Optional.
  • addToActiveDirectory: Required. Default is Yes.

The following is an example of how to use this mutation.

Sample Mutation:

mutation newuser {
  userAdd(input: {firstName: "new", lastName: "user", userName: "newuser", password: "Qwerty12345", email: "newuser@user.com", forceReset: false, portalCategory: PortalAdministrator, licenses: 1, addToActiveDirectory: true}) {
    users {
      id
      organizations {
        name
        id
        accountNumber
      }
      identityProvider
      userName
      fullName
      companyName
    }
  }
}
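
The following is a sketch of the data this mutation might return, based on the fields requested in the sample above; the values are illustrative.

Sample returned data:

{
  "data": {
    "userAdd": {
      "users": [
        {
          "id": 231,
          "organizations": [
            {
              "name": "Example Org",
              "id": 1,
              "accountNumber": "0001"
            }
          ],
          "identityProvider": null,
          "userName": "newuser",
          "fullName": "new user",
          "companyName": null
        }
      ]
    }
  }
}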

Connect API Explorer: New userDelete mutation

The new userDelete mutation allows the deletion of user accounts using the API so that you can integrate your user management application with Nuix Discover. The mutation behaves as follows:

  • If all specified users exist, executing the userDelete mutation with single or multiple userId values deletes all specified users.
  • If some of the specified users do not exist, the mutation deletes the users that do exist and returns the id values of the missing users as null.
  • If none of the specified users exist, the mutation returns the user id values as null.

Fields:

  • userId: An integer that identifies the user in the portal.

The following is an example of how to use this mutation.

Sample Mutation:

mutation userDelete {
  userDelete(input: {userId: [231]}) {
    users {
      id
    }
  }
}
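
The following is a sketch of the returned data when a specified user does not exist; as described above, the user id value is returned as null. The values are illustrative.

Sample returned data:

{
  "data": {
    "userDelete": {
      "users": [
        {
          "id": null
        }
      ]
    }
  }
}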

Connect API Explorer: Access and download API documentation

There are two new buttons available on the Connect API Explorer page, as shown in the following figure.

API Download and Open Docs buttons

The Open docs button opens additional API documentation that contains more in-depth guidance on creating and handling queries and mutations. When you click the Open docs button, the Connect API Documentation tab appears, containing the API documentation, as shown in the following figure. On the left are links to individual topics. Clicking a link scrolls the page to the selected topic.

API Documentation

Note: The top-right corner of the Connect API Documentation tab shows your specific URL location of the documentation and the current version of the document.

To download the documentation, click Download docs. This downloads the documentation as a Hypertext Markup Language (HTML) page for viewing in any browser window.

Import API: Run indexing and enrichment using createImportJob mutation

The createImportJob mutation now contains a parameter for running an indexing and enrichment job after an import job completes.

  • Name: runIndexing
  • Type: Boolean
  • Required: No
  • Default: false

The following is an example of how to use this parameter.

Sample mutation:

mutation {
  createImportJob (
    caseId:26,
    options:{
      name:"My Import Job",
      description:"Import job description",
      level:"Imports/Custodian A/0001",
      docsPerLevel:1000,
      updateGroupCoding:true,
      runIndexing:true
    }
  )
  {
    rdxJobId
  }
}

Note: If this parameter is set to true, an indexing and enrichment process will run after the import job.

Import API: Run deduplication in import job

The createImportJob mutation now allows the option to suppress documents from the import job as duplicates. When the runDeduplication parameter is set to true, the job will use the deduplication settings associated with Ingestions processing as follows:

  • Use the default setting for Case or Custodian. If there is no default setting, use Case.
  • Use the default setting for Only use the top parent documents to identify duplicates. If there is no default setting, use False.
  • Do not retain suppressed files regardless of the setting.

The following additional considerations apply during processing:

  • The Imports feature normally codes all imported documents with a Yes in the Exclude from Ingestions Deduplication field. This coding does not occur when deduplication is selected and the setting is Case or Custodian.
  • The files within suppressed documents will not transfer.
  • If suppressing a document that contains an existing document ID in main_suppressed, the application returns the following message: Document <doc ID> was identified as a duplicate to be suppressed, but it was not suppressed because a document with the same Document ID has already been suppressed in this case.

In the createImportJob mutation, add the following parameter under options:

  • Name: runDeduplication
  • Type: Boolean
  • Required: No
  • Default: false

Note: Set runDeduplication to true to run deduplication on the documents within this import and to suppress duplicates. This process uses the deduplication settings for Ingestions.

The following is an example of how to use these parameters.

Sample mutation:

mutation {
  createImportJob (
    caseId:26,
    options:
    {
      level:"Imports",
      docsPerLevel:1000,
      updateGroupCoding:true,
      runDeduplication:true
    }
  )
  {
    rdxJobId
  }
}

On the Properties page for an import job, found on the Case Home > Manage Documents > Imports page, there is a new row under Statistics that reports on the number of suppressed documents, as shown in the following figure. This new row will only appear when using the deduplication option. If no duplicates are found, the value will appear as zero.

Import Job Statistics data

Import API: Assign sequential document IDs in an import job

The createImportJob mutation now contains parameters for assigning sequential document ID values for documents in the job.

  • Name: documentIdFormat
  • Valid values: Sequential or Existing
  • Required: No
  • Default: Existing

Note: Use a value of Sequential to have the application reassign document ID values for the documents within this import. Assignment of document IDs uses the provided prefix beginning with the next available document ID number matching that prefix and incrementing by 1 for each document.

  • Name: documentIdPrefix
  • Type: String
  • Required: No

Note: This is static text that appears at the beginning of each document ID only when using Sequential for the documentIdFormat option. If you do not provide this option, the application will use the document ID prefix setting from the Ingestions default settings.

When the documentIdFormat option is Sequential, the job generates a new document ID for all documents within the job. The generated ID will consist of a prefix from documentIdPrefix and a number value padded to nine digits beginning with the next available number in the case with the same prefix.

The application generates document source and attachment relationships using the parentId references and the provided document ID values. If you use sequential renumbering, source and attachment relationships are generated only from the parentId references within this job. Documents will not attach to prior existing documents.

If the document contains only one page, the page label matches the document ID. For documents containing multiple pages, the page labels are DocID-00001, DocID-00002, DocID-00003, and so on, through the last page.

For files that are in pages, the page file name matches the page label, such as DocID-00001.tif, DocID-00002.tif, and so on. For files not in pages, the file is named after the document ID, such as DocID.xls.
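
For example, the following hypothetical illustration shows the IDs, page labels, and file names that would result from a documentIdPrefix of "Doc_" when the next available number for that prefix is 1:

  • Doc_000000001: a single-page document; page label Doc_000000001, page file Doc_000000001.tif
  • Doc_000000002: a two-page document; page labels Doc_000000002-00001 and Doc_000000002-00002, page files Doc_000000002-00001.tif and Doc_000000002-00002.tif
  • Doc_000000003: a document not in pages; file Doc_000000003.xls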

The following is an example of how to use these parameters.

Sample mutation:

mutation {
  createImportJob (
    caseId:26,
    options:
    {
      level:"Imports",
      docsPerLevel:1000,
      updateGroupCoding:true,
      documentIdFormat:Sequential,
      documentIdPrefix:"Doc_"
    }
  )
  {
    rdxJobId
  }
}

Import API: Transfer files from S3 in createImportJob mutation

The createImportJob mutation now contains parameters to transfer files from S3.

  • Name: fileTransferLocation
  • Valid values: AmazonS3 or Windows
  • Required: No
  • Default: Windows

Note: The default is Windows. When selecting Windows, the files copy from the file repository designated for Images under the import\<case name> folder. When selecting AmazonS3, this mutation returns information needed to access the S3 bucket.

When fileTransferLocation is AmazonS3, you can request the following S3 return values within the fileTransferLocationInfo parameter:

  • accessKey
  • secretAccessKey
  • token
  • repositoryType
  • regionEndpoint
  • bucketName
  • rootPrefix
  • expiration

Note: When the fileTransferLocation is AmazonS3, the mutation copies the files from the Amazon S3 bucket and folder created for the job rather than from the import folder on the agent.

The following is an example of how to use these parameters.

Sample mutation:

mutation {
  createImportJob (
    caseId:26,
    options:
    {
      level:"Imports",
      docsPerLevel:1000,
      updateGroupCoding:true,
      fileTransferLocation:AmazonS3
    }
  )
  {
    rdxJobId
    fileTransferLocationInfo
    {
        accessKey
        secretAccessKey
        token
        repositoryType
        regionEndpoint
        bucketName
        rootPrefix
        expiration
    }    
  }
}

Sample returned data:

{
  "data": {
    "createImportJob": {
      "rdxJobId": 1040,
      "temporaryFileTransferLocationConnectInfo": {
        "accessKey": "AEK_AccessKeyId",
        "secretAccessKey": "AEK_SecretAccessKey",
        "token": "AEK_SessionToken",
        "repositoryType": "AmazonS3",
        "regionEndpoint": "AEK_Region",
        "bucketName": "AEK_Bucket",
        "rootPrefix": "AEK_JobPrefix",
        "expiration": "2019-11-27T07:04:29.601994Z"
      }
    }
  }
}

Import API: New importJobS3Refresh mutation to refresh S3 credentials

The new mutation called importJobS3Refresh allows you to refresh credentials for an S3 folder created as part of an import job. These credentials expire after 12 hours. However, it is possible that transfer of files will continue past this time frame.

The importJobS3Refresh mutation takes the caseId and rdxJobId, which allow the application to look up the folder information. As an additional security measure, the mutation also takes the original accessKey and secretAccessKey, which must match the originally provided keys.

The following describes the mutation and parameters:

  • importJobS3Refresh: Obtains new file transfer location information for an existing import job.
  • accessKey (parameter): Uses the accessKey value previously returned for this import job.
  • secretAccessKey (parameter): Uses the secretAccessKey value previously returned for this import job.

If there is no S3 information for the provided job ID, the application returns the following error: There is no information available for this rdxJobId. If the accessKey or secretAccessKey does not match, the application returns the following error: The keys provided do not match the keys for this rdxJobId.

The following is an example of how to use these parameters and the possible returned data.

Sample mutation:

mutation {
  importJobS3Refresh (
    caseId:26,
    rdxJobId:324,
    accessKey:"AEK_AccessKeyId_Old",
    secretAccessKey:"AEK_SecretAccessKey_Old"
  )
  {
    rdxJobId
    fileTransferLocationInfo
    {
        accessKey
        secretAccessKey
        token
        repositoryType
        regionEndpoint
        bucketName
        rootPrefix
        expiration
    }    
  }
}

Sample returned data:

{
  "data": {
    "importJobS3Refresh": {
      "rdxJobId": 1040,
      "fileTransferLocationInfo": {
        "accessKey": "AEK_AccessKeyId",
        "secretAccessKey": "AEK_SecretAccessKey",
        "token": "AEK_SessionToken",
        "repositoryType": "AmazonS3",
        "regionEndpoint": "AEK_Region",
        "bucketName": "AEK_Bucket",
        "rootPrefix": "AEK_JobPrefix",
        "expiration": "2019-11-27T07:04:29.601994Z"
      }
    }
  }
}

Import API: Modifications to parameter requirements in FieldParams

The type and onetomany field parameters are no longer required in FieldParams. The following describes the behavior when these parameters are not provided.

  • When the type field parameter is not provided, the application matches on the field name only.
    • If no match is found, the application records the following error: The value for field <field name> for document <Document ID> was not imported. No such field exists, and no field type was provided to create a new field.
    • If a match is found on multiple existing fields, the data does not import, and the application records the following error: The value for field <field name> for document <Document ID> was not imported. Multiple fields exist with the name provided, and no field type was provided.
  • When the onetomany field parameter is not provided and no match is found on the field name, the application creates a new field as one-to-many.
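
The following is a minimal sketch of a fields entry in the addDocumentsForImportJob mutation that omits the type and onetomany parameters; the field name and value are illustrative.

fields:[
  {
    name:"Custodian",action:InsertUpdate,values:"Custodian A"
  }
]

In this case, the application matches on the Custodian field name only, as described above.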

SaaS / Hosted Monthly Release Notes - December 2019 (10.1.005 - 10.1.008)

Analysis > Predictive Coding > Add custom Predictive Coding Templates

The Predictive Coding Templates page has been added to the Analysis capabilities in Nuix Discover and is available to all administrators. This page allows administrators to select the Standard or Standard + people template when setting up predictive coding or Continuous Active Learning (CAL) models, or to create their own templates.

Note: The Standard and Standard + people templates are available to all cases and cannot be modified.

Create a new Predictive Coding Template

To create a new template, go to the Case Home > Analysis > Predictive Coding Templates page and click Add. Add a name and description for the template, and then click Save. The Fields page opens for that template. To add fields to the template, select a field in the Add field list and click the + (plus sign) button.

Predictive Coding Templates Fields page Field selection

The following information applies to fields in a predictive coding template.

  • The values of date fields included in a template appear as text strings.
  • The weight for each field is 1 by default, but you can change the value to anything between 1 and 10. Weight reflects the amount of influence a field has on the model in relation to other fields in the template. For example, if you want People information to be more heavily considered in the model than other fields, adjust the weight value on the People fields to be higher than the other field weight values.

Predictive Coding Templates Fields page showing added field

The following information applies to all custom predictive coding templates.

  • Extracted text from documents is included in every template, although it is not listed as an item in the template. The training field for the model that the template is selected for is also included.
  • Once a template is being used by a CAL or predictive coding model, it cannot be edited. Open the template’s Properties page to view the names of the models that are using the template.

Predictive Coding Templates Properties page

Clone a Predictive Coding Template

All custom templates can be cloned, regardless of whether they are in use. To clone a template, open the Fields page for the template and click Clone template. Update the template name as needed and click Save. The Fields page for the new template opens. Add fields, delete fields, or change any of the field weights on that page.

Delete a Predictive Coding Template

You can delete any custom predictive coding template that is not in use by a predictive coding or CAL model. To delete a template, open the Fields page for the template and click Delete template.

Use Predictive Coding Templates with CAL

Administrators now have the option to select a predictive coding template when configuring training for a model. To select a template, go to the Case Home > Analysis > Populations and Samples page and select a population. Then, open the Predictive Coding page for the population and click Configure training. On the Settings page, select a template in the Predictive coding template list.

Configure training Settings page

Note: You can change the predictive coding template throughout the lifecycle of the training model. However, at the present time, the application only provides data about the template currently selected for training; it does not record the history of previously selected templates.

Use Predictive Coding Templates with the Predictive Coding standard workflow

To select a predictive coding template to use when adding a predictive model, go to the Case Home > Analysis > Predictive Models page and click Add. In the Add Predictive Model dialog box, select a predictive coding template in the Predictive coding template list.

Add Predictive Model page

Portal Management > Processing > Jobs: Size of Elasticsearch index captured during Gather case metrics job

If a case uses an Elasticsearch index, the Gather case metrics job now captures the size of the Elasticsearch index. The Elasticsearch index is used to capture the coding audit history.

Portal Management > Reports: Elasticsearch index size available in the Hosted Details report

If a case uses an Elasticsearch index, you can view the size of the Elasticsearch index for a case on the Reports > Hosted Details page. The name of the new column is Elasticsearch index (GB). The Elasticsearch index is used to capture the coding audit history.

Connect API: New case statistic in the API {cases{statistics}} query

The Nuix Discover Connect API contains a new sizeOfElasticSearchIndex field that returns the total size of the Elasticsearch index for cases. The Elasticsearch index stores the audit history records for coding changes that are viewable within the Coding History pane.

The following example uses the new sizeOfElasticSearchIndex field in the cases {statistics} object.

{
  cases {
    name
    statistics {
      sizeOfElasticSearchIndex
    }
  }
}

The sizeOfElasticSearchIndex field is also part of the aggregateTotalHostedSize statistic, which returns the sum of sizeOfBaseDocumentsHostedDetails, sizeOfRenditionsHostedDetails, aggregateDatabases, sizeOfElasticSearchIndex, dtIndexSize, sizeOfNonDocumentData, and sizeOfOrphanFiles.
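
The following is a minimal sketch of a query that requests the aggregate statistic, assuming aggregateTotalHostedSize is exposed on the same statistics object as in the example above.

{
  cases {
    name
    statistics {
      aggregateTotalHostedSize
    }
  }
}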

SaaS / Hosted Monthly Release Notes - November 2019 (10.1.001 - 10.1.004)

Portal Management > Reports: Change the time zone

You can now change the time zone for the data that appears on the Portal Management > Reports > Usage and Hosted Details pages from local time to Coordinated Universal Time (UTC). Using UTC time allows the reports to display data consistently with reports that are generated through the API when querying for specific dates or date ranges. By default, the data appears in local time.

Use the following procedure to change the time zone from local time to UTC.

  1. On the Portal Management > Reports > Usage or Hosted Details page, on the toolbar, click the Time zone button.
  2. In the Time zone dialog box, shown in the following figure, select UTC time.

Time Zone dialog box

  3. Click OK.

The data that appears is then based on UTC time.

Portal Management > Reports: Subtotal column added to Hosted Details report

The Portal Management > Reports > Hosted Details page now includes a Subtotal (GB) column.

Note: The label for the Total size (GB) column changed to Total (GB).

In the Subtotal (GB) column, you can view a subtotal of the active data, which includes the data in the following columns:

  • Base documents (GB)
  • Production renditions (GB)
  • Databases (GB)
  • Content index (GB)
  • Predict (GB)
  • Orphan (GB)

Portal Management > Settings > Log Options: Download a telemetry log file

The Portal Management > Settings > Log Options page includes a new button on the toolbar named Download log that you can use to download a telemetry log file. The application downloads the telemetry log data to a .log text file.

To keep the file size manageable, you can configure the number of records to maintain in the JSON string in the Telemetry archive configuration setting on the Portal Management > Settings > Log Options page. For example, as shown in the following figure, NRecentRecordsToReturn is set to 10000.

Telemetry archive configuration setting

SaaS / Hosted Monthly Release Notes - October 2019 (10.0.009 - 10.1.000)

Audio: Resubmit multiple previously transcribed documents

You can now resubmit audio documents to generate new transcriptions using the Transcribe audio option on the Tools menu. Doing so can be useful if you selected the wrong language model when you transcribed audio documents, or if errors occurred during the transcription job.

Before you resubmit previously transcribed documents, note the following:

  • After you resubmit the audio documents, the application removes any corrections that were made in the previous transcriptions.
  • You cannot resubmit documents that have annotations. Delete the annotations first.

Use the following procedure to resubmit previously transcribed audio documents.

  1. On the Tools menu, select Transcribe audio.
  2. In the Transcribe audio dialog box, shown in the following figure, do the following:

Transcribe audio confirmation message

    • Under Language model, select the language. You can select one of the following audio language models:
      • Arabic (Modern Standard)
      • Brazilian Portuguese
      • Chinese (Mandarin)
      • English (UK)
      • English (US)
      • French
      • German
      • Japanese
      • Korean
      • Spanish
    • Under Optional inclusions, select the check boxes for the documents that you would like to resubmit.
  3. Click OK.

Tools > OCR processing: Languages listed in alphabetical order in the OCR processing dialog box

In the OCR processing dialog box, available languages for OCR processing now appear in alphabetical order.

Ingestions: Show level settings in Add ingestion dialog box

In the Add ingestion dialog box, a read-only display of the default level settings for the case now appears under the Family deduplication setting.

For example, select the default settings for levels, as shown in the following figure.

Default settings Levels page

These levels appear in the Add ingestion dialog box under the Levels heading, as shown in the following figure.

Add ingestion dialog box

Exports: Updates to the MDB Classic export type

Two updates have been made to the MDB Classic export type in the Export window.

  • Administrators can export a production or a set of rendition documents. In previous releases, administrators could export only binders or base documents with this export type.
    • When creating an export from the Manage Documents page, administrators can select the MDB Classic export type.
    • When selecting rendition documents from search results for export using the Tools > Export menu option, administrators can select the MDB Classic export type from the Export type list.
  • Administrators can choose to populate the pages table of an MDB export file even if no files are selected for export.
    • If an administrator selects the option to export an MDB load file in the Export window but does not select any files to export, the pages table of the exported MDB file is empty by default. However, administrators can now populate the pages table of the MDB file anyway. On the Load files page, in the Settings window (available when you click the Settings button, or gear), select the Populate the pages table of the MDB even if no files are selected for export check box.

      Export Renditions Load files page Settings options

SaaS / Hosted Monthly Release Notes - September 2019 (10.0.005 - 10.0.008)

Audio pane: Select a language model to use for transcription

You can now specify the language model to use for transcription. For example, if you know that the audio in a file uses British English instead of American English, you can select English (UK) as the source language before you transcribe the audio file.

To specify the language model for an individual file, select a file, and then click the Transcribe audio button in the Audio pane. In the Transcribe audio dialog box, select an option from the Language Model list, and then click OK.

Transcribe audio dialog box Language selection

To specify the language model for multiple files, select the files. On the Tools menu, select Transcribe audio. In the Transcribe audio dialog box, select an option from the Language Model list, and then click OK.

Transcribe audio dialog box Language model selection

You can select one of the following audio language models:

  • Arabic (Modern Standard)
  • Brazilian Portuguese
  • Chinese (Mandarin)
  • English (UK)
  • English (US)
  • French
  • German
  • Japanese
  • Korean
  • Spanish

Audio pane: Resubmit transcribed audio file

If you accidentally selected the wrong language model when you transcribed an audio file, you can click the Transcribe audio button in the Audio pane to resubmit the transcription using a different language model, as shown in the following figure.

Note: This functionality is not yet available for multiple files using the Tools > Transcribe audio option.

Transcribe audio dialog box confirmation message

Note: You cannot re-transcribe a file that has annotations. Delete the annotations first.

Coding History: Case administrators can see all records regardless of group membership and security

Case administrators can see all history records, including records for deleted objects, in the template views in the Coding History pane, regardless of their group membership and the group security settings for objects such as binders, fields, or productions.

Case Setup > System Fields: New system field for Audio Language Model

A new system field named Audio Language Model is available on the Case Setup > System Fields page.

Note: The application disables this field for groups by default, and you cannot grant groups write access to this field.

The application populates this field after a user submits an audio transcription from the Audio pane or from the Tools > Transcribe audio menu. The field value is the name of the language selected in the Language Model list for the audio transcription.

Audio Language model Items page

Manage Documents > Exports: Enhancements and changes to the Exports feature

You will now get the same export results regardless of the way that you choose to submit the export job. You can submit export jobs on the Manage Documents > Exports page or by using the Tools > Export feature on the Documents page.

Only administrators can export documents from the Manage Documents > Exports page. In addition, the user interface used in the Tools > Export feature is now also used on the Manage Documents > Exports page and includes the same options for administrators.

Major enhancements and changes

  • When exporting on the Manage Documents > Exports page, you can now export more than one load file at a time.
  • For base documents, you can select options to convert image files to PDF or TIFF.
  • For any load file field references to files, for page or document load files, the application now populates load file fields based on the files exported along with load files. This is different than how the Manage Documents > Exports feature worked previously for page load files. For example, in the legacy code, if you exported an MDB load file on its own, but no other files, the pages table would reflect main_pages for the documents in the export. In the updated code, if you export an MDB with no files, no updates occur to the pages table.

Other enhancements and changes

  • Exported files exist in a folder named according to the export name and ID under the export folder. However, you can select a repository from the File repository list and, under Output folder path, you can export to an existing folder instead. To select a file repository or an existing folder, on the Define export page, click the Settings (gear) button to open the Settings window, as shown in the following figure.

    Export page Settings Options
  • When exporting using the Manage Documents > Exports feature, on the Source page, you can choose to export a Binder of documents or a Production. Depending on whether you select Binder or Production on the Source page, the options on subsequent pages will differ. This is similar to how the options change in the Exports > Tools window depending on whether you select base or rendition documents.
  • Note: This page is not enabled when using the Tools > Export feature on the Documents page because that export is based on documents selected in a search result.

    Export window Source Options
  • A new Image settings page replaces the PDF settings page.
    • For image files, users can select the option to convert images to non-searchable PDFs or to convert PDFs to TIFF. These options were previously available for production exports from the Manage Documents > Exports page and are now options for base document exports as well. If the document set already consists of PDFs, you can select the following option from the Image format list: Embed OCR text in existing PDFs. Selecting this option will not create searchable PDFs from non-PDF files.
    • Note: The Embed OCR text in existing PDFs option is available only if the Enable PDF annotations option is set for the case.

      Export window Image Settings options
  • A new Export type named MDB Classic is now available to administrators on the Define export page, as shown in the following figure.

    Export window Define Export options
    • The MDB Classic export type makes file selection and MDB pages table updates more consistent and predictable. This is similar to using IEM in the past.
    • On the File types page, instead of selecting the options to export endorsable images, native, and content (.txt) files, you can now choose to export Imaged pages or Content files, as shown in the following figure. If you select Imaged pages, the application exports all of the files that you can see in the Image viewer in the View pane. If you select Content files, the application exports all of the files that you can see in the Native viewer in the View pane.
    • Export window Select file types options
    • Just like for the Custom export type, on the Annotations page in the Export window, users can choose to endorse footers and annotations.
    • The options for omitting other files when a file is annotated are slightly different than the omit file options for Custom export types. For the MDB Classic export type, the default options are as follows:
      • Omit other page files if document images are annotated: When this option is selected, only the annotated files are exported. The application will exclude any other page files from the export.
      • Omit content files if document is annotated: When this option is selected, the application excludes all content files from the export.

      Export window Apply annotations options
    • For the MDB Classic export type, you can select only MDB load files for export with the files. By default, if exporting files, the pages table of the MDB will mirror the main_pages table in the application, that is, what is seen in the Image viewer in the View pane.
    • A new option, shown in the following figure, is available for the MDB Classic export type. If needed, click the Settings (gear) button to select the following option:
      • Associate all exported files for a document in the pages table. If you select this option, all files exported will be represented in the pages table of the MDB, even if they did not exist in the main_pages table.

      Export window Include load file options

Export Feature Summary

  • The Export feature on the Manage Documents page is available only to administrators and is always available to administrators.
    • The export set is based on a selected Binder or Production.
    • No group security is enabled for the items listed for selection. All Binders, Productions, Fields, and Annotations are listed as options.
  • The Export feature, which is an option available on the Tools menu on the Documents page, is available only if the user’s group is set to Allow on the Security > Features page for the Processing – Exports feature. The following additional information applies:
    • The export set is based on selected documents in search results.
    • Group security is enabled for the items listed for selection. Users will see fields or annotations that are allowed only for the group they are logged in as.
    • Non-administrators have access to only one export type, which is Native files only, no load file included.

The following list provides an overview of the use case, security, available file type options, and handling of base documents and renditions, as well as an overview of the updates to the MDB pages table for the three export types.

  • Export Type > Use case
    • Custom: Select this option if you want all available file options.
    • Native files only, no load file included: Select this option if you only want to export native files for a set of documents and nothing else.
    • MDB Classic: Select this option if you are loading the export to another Nuix Discover case and want the file organization or views to be the same in the target case.
  • Export Type > Security
    • Custom: Administrators only
    • Native files only, no load file included:
      • Available to administrators
      • Available to non-administrators who have access to the export feature
    • MDB Classic: Administrators only
  • Export Type > File type options available
    • Custom:
      • Endorsable image files: Any files in the Image viewer that are .tif, .tiff, .jpeg, .jpg, .bmp, .png, or .pdf (if PDF annotations are enabled in the case)
      • Native files: Highest-ranking non-txt file or file with an extension matching the field value (if specified)
      • Content files (.txt): Existing .txt file on fileshare or extracted text (for base documents)
    • Native files only, no load file included:
      • No selection available
      • The application will export only one native file per document
      • The native is the highest-ranking non-txt file or file with an extension matching the field value (if specified in case options)
    • MDB Classic:
      • Imaged pages: Any files in the Image viewer
      • Content files: Any files in the Native viewer
  • Export Type > Other options: Base documents
    • Custom:
      • Image format: Select to embed OCR text in existing PDFs, convert images to PDF, or convert PDFs to TIFF
      • Footers
      • Annotations
      • Load file: One MDB or any number of non-MDB load files
    • Native files only, no load file included:
      • Exported file structure:
        • As currently foldered in the case
        • Flattened
    • MDB Classic:
      • Image format: Select to embed OCR text in existing PDFs
      • Footers
      • Annotations
      • Load file: One MDB
  • Export Type > Other options: Rendition documents
    • Custom:
      • Image format: Select to embed text in existing PDFs, convert images to PDF, or convert PDFs to TIFF
      • Load file: One MDB or any number of non-MDB load files
    • Native files only, no load file included:
      • Exported file structure:
        • As foldered in the case
        • Flattened
    • MDB Classic: Not available for production renditions
  • Export Type > MDB pages table
    • Custom:
      • At least one file per document will be associated with a document in the pages table (as long as it was selected for export).
      • If endorsable images are exported, those will be associated with the document in the pages table.
      • If only a native file is exported for a document, it will be associated with the document in the pages table.
      • If only a content file is exported, the .txt file will be associated with the document in the pages table.
      • If you select the option to Update the pages table to mirror files in the image viewer, and if you select both endorsable images and natives for export, and both of those file types exist in the Image viewer for a document, then those files will all be associated with the document in the pages table.
      • If you do not select any files for export, the pages table will be empty.
    • Native files only, no load file included: Not applicable
    • MDB Classic:
      • The pages table will mirror files available in the Image viewer if you select Imaged pages to be exported.
      • If you do not select Imaged pages to be exported, no files will be referenced in the pages table.
      • If you select Content files to be exported as well as the option to Associate all exported files for a document in the pages table, then the content files exported will be referenced in the pages table.

Additional basic information about how exports work

  • The application copies the exported files to the case default file transfer file repository and a unique subfolder under the export folder. Administrators can change the file repository and select an existing subfolder to copy the files to.
    • The application names the subfolder under the export folder based on the export name and the export ID. The application names the load files according to the export name only, and not the export ID.
  • Exported file structure:
    • When exporting files with an MDB load file, files are exported in the same file structure as they exist in the case.
    • When exporting files with a non-MDB load file, files are separated into images, native, and text folders. However, if exporting a production from the Manage Documents page, the application respects the export path details in the production settings. Note that any system load file templates reference the default folder names of image, native, and text files.

Manage Documents > Ingestions: Improved handling of missing files in the ingest_temp folder during file transfer

In previous versions, the application could not complete the transfer of files during the ingestions process if any files were missing from the ingest_temp folder. This would often occur when files were quarantined by virus scanning software. In those instances, the application could not complete the ingestions job without manual intervention. With this release, if files cannot be copied because they do not exist in the ingest_temp location, the application does the following:

  • Creates a slipsheet for any missing file with the text “File not available to copy.” Copies the slipsheet to the proper location in the images folder and references the slipsheet in the main_pages table.
  • Codes the document with a value of “File Copy Failed” in the [Meta] Processing Exceptions field.
  • Codes the document with a value of “File not available in temporary folder” in the [RT] Ingestions Exception Detail field.
  • Updates the [Meta] File Extension - Loaded field with a value of “pdf.”
  • Codes the [Meta] File Extension - Original field with the extension of the original file.

Manage Documents > Ingestions: Support up to 10 levels

Administrators can now select up to ten levels on the Levels page in the Default settings window for Ingestions.

For each level, you can select one of the following options:

  • Constant: Enter a static value into the box.
  • Select a field: A list appears that allows you to select a field. You can select any one-to-one field that is selected on the Customize Fields page in the Advanced settings window.
  • None.
  • Existing levels: Select a level that already exists for the case.

Manage Documents > Load File Templates: Field name suffixes removed in the Variable Builder

In the Variable Builder for load file templates, the names of the field types (DATE, MEMO, NUMB, PICK, TEXT, YES/NO) no longer appear in the Name column.

Variable Builder Quick Picks tab

Portal Management > Settings: Enable telemetry logging from the portal database

You can write telemetry logging data to the portal database. This logging data includes all usage metrics and application errors for a portal.

The following settings are available on the Portal Management > Settings > Log Options page, as shown in the following figure.

Portal Home Settings page

  • Enable telemetry logging: Select this check box to enable logging for the portal.
  • Log detail level: Select an option to adjust the level of detail captured in the log: Error, Info, Debug, or Trace.
  • Log file location: If you provide a location, the telemetry data is stored in physical files on the web servers.
  • Max log files: Provide a value to indicate the number of archive (.archive.log) files to keep on the web servers.
  • Store logs in database: Select this check box to store log data in the portal database. If selected, an RPF job pushes the data to S3 and cleans up the database table per the configuration setting indicated in the Telemetry archive configuration setting.
  • Note: If this option is selected, and the Telemetry archive configuration setting is not configured, then no log entries will be deleted from the database table.

  • Telemetry archive configuration: The information in this setting controls the frequency of when the RPF job runs to upload log entries from the database table to S3 and clean up the portal database. This setting is a JSON string with the following fields:
  • {
      "Checkpoint": "0",
      "Key": "AWS key",
      "Secret": "AWS secret key",
      "Region": "AWS region",
      "Bucket": "AWS S3 bucket name",
      "CleanupMaxDays": 30,
      "ScheduleId": null,
      "IntervalInMinutes": 60
    }
    • Checkpoint: Defaults to 0. This holds the value of the last successful upload to S3.
    • Key: AWS key
    • Secret: AWS secret key
    • Region: AWS region
    • Bucket: AWS S3 bucket
    • CleanupMaxDays: Cleans up database records that are older than this number of days.
    • ScheduleId: Defaults to null. This will be set by the RPF job and should not be modified manually.
    • IntervalInMinutes: Defaults to 60. This sets the frequency, in minutes, for the RPF scheduled job.
  • The Log Options page also includes the following additional changes:
    • The following two syslog options were removed:
      • Ringtail syslog server name
      • Ringtail syslog server port
    • Some options were renamed as follows:
      • Log enabled > Enable telemetry logging
      • Log level > Log detail level
      • Log location > Log file location
      • Max Archive Files > Max log files
      • Database log enabled > Store logs in database

Portal Management > Cases and Servers: Assign logical database names to cloned cases

In previous releases, when cloning a case, the application created database files using the name of the source case that was cloned, rather than the name of the new case. The database file names are now based on the name of the cloned case, not the original case.

Import API

There are three new mutations in the Nuix Discover Connect API for importing documents into a case: createImportJob, addDocumentsForImportJob, and submitImportJob.

Create an import job

You can create an import job in a case using the createImportJob mutation. This mutation returns the rdxJobID, which is used in the next mutation to add documents to the import job. This mutation also allows you to configure some job-level settings.

Sample mutation:

mutation {
  createImportJob (
    caseId:26,
    options:{
      name:"My Import Job",
      description:"Import job description",
      level:"Imports/Custodian A/0001",
      docsPerLevel:1000,
      updateGroupCoding:true
    }
  )
  {
    rdxJobId
  }
}

Sample response:

{
  "data": {
    "createImportJob": {
      "rdxJobId": 319
    }
  }
}

Configurable options:

  • name: String is the name of the import job. If you do not provide a value for this option, the job name is “Import from API.”
  • description: String is the description of the import job. If you do not provide a value for this option, the job description is “Import from API.”
  • level: String determines the root level to put documents in. If you do not provide a value for this option, the level is “API/{ImportID}/0001.” Level values assigned to documents in the addDocumentsForImportJob mutation override this setting.
  • docsPerLevel: Int determines the maximum number of documents per level. If you do not provide a value for this option, the value is 1000.
  • updateGroupCoding: Boolean updates the group coding fields (All Custodians, for example) for new documents in this import and any existing or future family duplicate documents. If you do not provide a value for this option, the value is “false.”

Add documents to an import job

You can use the addDocumentsForImportJob mutation to add documents to an import job that was created using the createImportJob mutation. Each addDocumentsForImportJob mutation allows you to add up to 5000 documents. To add additional documents to the job, run multiple mutations with different documents.

Note: When defining the path value for pages and content files, the path is relative to the “import” folder in the Image file repository defined for the case.

For example, if the path is defined as follows:

path:"Imports\\Media0001\\Images\\0001\\DOC-00000001.tif"

then the file should be located at:

{Image file repository}\import\{case name}\Imports\Media0001\Images\0001\DOC-00000001.tif

Sample mutation:

mutation {
  addDocumentsForImportJob (
    caseId:26,
    rdxJobId:319,
    documents:[
      {
        documentId:"DOC-00000001",
        hash:"qwer1234asdf5678zxcv1234qwer5678",
        familyhash:"poui1234asdf5678zxcv1234qwer5678",
        level:"Imports/Custom/0001",
        parentId:"",
        sourceattachmentaction:Delete,
        pageaction:InsertUpdate,
        mainfields:[
          {
            name:DocumentDate,value:"2019-01-03",action:Update
          },
          {
            name:DocumentType,value:"Microsoft Outlook Message",action:Update
          },
          {
            name:DocumentTitle,value:"Re: Your message",action:Update
          },
          {
            name:DocumentDescription,value:"",action:Delete
          },
          {
            name:EstimatedDate,value:"False",action:Update
          }
        ],
        fields:[
          {
            name:"Custodian",onetomany:false,type:PickList,action:InsertUpdate,values:"Custodian A"
          },
          {
            name:"[Meta] Processing Exceptions",type:PickList,action:InsertUpdate,values:["Corrupted","Empty File"]
          },
          {
            name:"[Meta] File Name",onetomany:false,type:Text,action:InsertUpdate,values:"Re: Your message.msg"
          },
          {
            name:"[Meta] File Path",onetomany:false,type:Memo,action:InsertUpdate,values:"C:\\Downloads\\Email"
          },
          {
            name:"[Meta] File Size",onetomany:false,type:Number,action:Delete,values:"1592"
          },
          {
            name:"[Meta] Date Sent",onetomany:false,type:DateTime,action:InsertUpdate,values:"2019-01-03"
          }
        ],
        correspondence:[
          {
            type:"From",people:"acustodian@example.com",orgs:"example.com",action:InsertUpdate
          },
          {
            type:"To",people:"bsmith@example.com",action:Append
          },
          {
            type:"CC",people:["kjohnson@example.com","ewilliams@example.com"],action:InsertUpdate
          }
        ],
        pages:[
          {
            pagenumber:1,pagelabel:"DOC-00000001",path:"Imports\\Media0001\\Images\\0001\\DOC-00000001.tif"
          },
          {
            pagenumber:2,pagelabel:"DOC-00000002",path:"Imports\\Media0001\\Images\\0001\\DOC-00000002.tif"
          }
        ],
        contentfiles:[
          {
            path:"Imports\\Media0001\\Natives\\0001\\DOC-00000001.mht"
          }
        ]
      },
      {
        documentId:"DOC-00000003",
        hash:"6425hyjkasdf5678zxcv1234qwer5678",
        familyhash:"poui1234asdf5678zxcv1234qwer5678",
        level:"Imports/Custom/0001",
        parentId:"DOC-00000001",
        sourceattachmentaction:InsertUpdate,
        pageaction:InsertUpdate,
        mainfields:[
          {
            name:DocumentDate,value:"2019-01-02",action:Update
          },
          {
            name:DocumentType,value:"Microsoft Word",action:Update
          },
          {
            name:DocumentTitle,value:"WordDoc.docx",action:Update
          },
          {
            name:DocumentDescription,value:"Sample description",action:Update
          },
          {
            name:EstimatedDate,value:"False",action:Update
          }
        ],
        fields:[
          {
            name:"Custodian",onetomany:false,type:PickList,action:InsertUpdate,values:"Custodian A"
          },
          {
            name:"[Meta] File Name",onetomany:false,type:Text,action:InsertUpdate,values:"WordDoc.docx"
          },
          {
            name:"[Meta] File Path",onetomany:false,type:Memo,action:InsertUpdate,values:"C:\\Downloads\\Email\\Re: Your message.msg"
          },
          {
            name:"[Meta] File Size",onetomany:false,type:Number,action:InsertUpdate,values:"74326"
          },
          {
            name:"[Meta] Date Modified",onetomany:false,type:DateTime,action:InsertUpdate,values:"2019-01-02"
          }
        ],
        pages:[
          {
            pagenumber:1,pagelabel:"DOC-00000003",path:"Imports\\Media0001\\Natives\\0001\\DOC-00000003.docx"
          }
        ]
      }
    ]
  )
  {
    documentCount
  }
}

Sample response:

{
  "data": {
    "addDocumentsForImportJob": {
      "documentCount": 2
    }
  }
}

Configurable options:

  • documentId: String! imports the Document ID of the document.
  • hash: String imports the individual MD5 hash value of the document. This value is added to the [RT] MD5 Hash field in the case.
  • familyhash: String imports the family MD5 hash value of the document. This value is added to the [RT] Family MD5 Hash field in the case.
  • level: String, when set, overrides any level data set in the job options. Levels are not updated for existing documents.
  • parentId: String is the parent document ID for the document that establishes a source/attachment relationship. The source/attachment relationship is either updated or deleted depending on the value set for sourceattachmentaction.
  • sourceattachmentaction: SAAction determines which of the following actions to take for the parentId field:
    • Delete removes coding from the document for the field.
    • InsertUpdate inserts or updates the value(s) of the field.
  • pageaction: Action determines which of the following actions to take on the pages:
    • Update inserts or updates the page(s) for the document.
    • Delete removes the page(s) from the document.
    • Ignore ignores the incoming page values.
  • mainfields: [DocumentFieldParams] imports the following data into core document fields in the case.
    • name: DocumentField! is the name of the document field. The names correspond to the core document fields in the case: DocumentDate, DocumentDescription, DocumentTitle, DocumentType, EstimatedDate.
    • value: String determines which of the following values is populated in the document field.
      • DocumentDate is the Document Date of the document. Format is YYYY-MM-DD.
      • DocumentDescription is the Document Description of the document.
      • DocumentTitle is the Document Title of the document.
      • DocumentType is the Document Type of the document.
      • EstimatedDate is the Estimated Date of the document. A Boolean value.
    • action: CoreAction! determines which of the following actions to take on the incoming field data:
      • Update inserts or updates the value(s) of the field.
      • Delete removes coding from the document for the field.
      • Ignore ignores the value.
  • fields: [FieldParams] imports the following data into fields in the case:
    • name: String! is the name of the field. If the field exists, the existing field is used. If not, the field is created with the indicated field type.
    • onetomany: Boolean defines whether the field is one to many.
    • type: FieldType! is the field type. The possible values are as follows:
      • Boolean allows you to set the value as Yes or No.
      • DateTime allows you to set the value in YYYY-MM-DD format.
      • Memo
      • Number
      • PickList
      • Text
    • action: Action! determines which of the following actions to take on the incoming data:
      • Append appends the value(s) to the field (only for one-to-many field types).
      • Delete removes coding from the document for the field.
      • InsertUpdate inserts or updates the value(s) of the field.
    • values: [String]! imports the value(s) for the field.
  • correspondence: [CorrespondenceType] imports the following people and organization values for the document:
    • type: String! determines the correspondence type. Possible values are To, From, CC, or BCC.
    • people: [String] contains a list of people values.
    • orgs: [String] contains a list of organization values.
    • action: Action! determines which of the following actions to take on the incoming field data:
      • Append appends the value(s) to the field (only for one-to-many field types).
      • Delete removes coding from the document for the field.
      • InsertUpdate inserts or updates the value(s) of the field.
  • pages: [PagesParams] imports the following values for the pages associated with the document:
    • pagenumber: Int! is the page number.
    • pagelabel: String is the page label of the page.
    • path: String! is the location of the physical file to upload.
  • contentfiles: [ContentFileParams] imports the list of content files for the document.
    • path: String! imports the location of the physical file to upload.
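
For example, a single entry in the documents list that appends pick list values to a one-to-many field and codes organization correspondence might look like the following fragment. This is an illustrative sketch only; the document ID, field name, and values are invented for the example.

{
  documentId:"DOC-00000004",
  correspondence:[
    {
      type:"From",orgs:["Example Corp"],action:InsertUpdate
    }
  ],
  fields:[
    {
      name:"Issues",onetomany:true,type:PickList,action:Append,values:["Issue A","Issue B"]
    }
  ]
}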

Submit an import job

After adding documents to a job using the addDocumentsForImportJob mutation, you can run the import job using the submitImportJob mutation.

Sample mutation:

mutation {
  submitImportJob (
    caseId:26,
    rdxJobId:325
  )
  {
    rpfJobId
  }
}

Sample response:

{
  "data": {
    "submitImportJob": {
      "rpfJobId": 11805
    }
  }
}

SaaS / Hosted Monthly Release Notes - August 2019 (10.0.001 - 10.0.004)

Nuix Ringtail is now Nuix Discover

On August 6, 2019, we are officially changing the name of our eDiscovery application from Nuix Ringtail to Nuix Discover. This new name better describes the unique and powerful capabilities of our award-winning software and better aligns it with the rest of the Nuix product family, most notably Nuix Workstation and Nuix Investigate. Nuix Discover is a central part of the Nuix Total Data Intelligence platform and vision, which promises to improve collaboration, innovation, and knowledge management for organizations around the globe.

Nuix Discover Logo

As a result of renaming Ringtail to Nuix Discover, changes have been implemented in the user interface.

  • After you log in, the navigation bar at the top of all pages displays the Nuix Discover logo and name.
  • Nuix Discover Logo shown at top of all pages
  • The What’s new in Ringtail section on the Portal Home page is now named What’s new.
  • What's new in Ringtail section
  • The What’s new in Ringtail section on the Case Home page is now named What’s new and displays the Nuix Discover logo.
  • What's new in Ringtail section
  • On the user name menu, under User settings, the Reset to Ringtail default menu option is now named Reset to case default.
  • Reset to case default option
  • The Ringtail Connect API Explorer is now named Connect API Explorer.
  • Portal Management - Connect API Explorer option

Track the history for documents viewed and downloaded

In the Document view history dialog box, you can see how many times on a given day a user viewed or downloaded a document in the View pane, or downloaded a document in the Code pane.

To open the Document view history dialog box, in the View pane, select the Document view history option from the View pane menu. Alternatively, if you pinned this option to the View pane toolbar, click the Document view history button.

Document view history dialog box

In the Document view history dialog box, each document viewing event appears on a new row. If a document was downloaded, a dot appears in the Downloaded column.

The downloaded report also includes this information.

Document view history downloaded column

Analysis > Mines: Grant group leaders administrative rights to mines

Administrators can now grant administrative rights to mines to group leaders. Previously, only administrators could manage mines.

To grant group leaders administrative access to mines, on the Security > Administration page, in the Leaders column, set the Analysis – Mines function to Allow, and then click Save.

Security Administration page

Once group leaders have been granted access to manage mines, they can perform the following tasks:

  • Add, delete, rebuild, edit properties, and manage security for mines.
  • Access the Security > Objects page for mines to set the permissions for groups.

Case Setup > Fields > Items: Prompt when renaming or deleting pick list items that are coded to documents

On the Case Setup > Fields > Items page, if you try to delete a pick list item that is coded to a document, the Delete selected items dialog box appears with a warning message. You are prompted to confirm the action before you can proceed.

Delete selected items dialog box

If you try to rename a pick list item that is coded to documents, a warning message appears in the Modify field value dialog box. After you click OK, inline editing is enabled.

Modify field value dialog box

If you try to rename a pick list item that is not coded to any documents, inline editing is enabled immediately.

Manage Documents > Ingestions: Add advanced setting for email files

The Email Files page is now available in the Advanced Settings window of the Ingestions feature. On this page, users can select the type of file that is available in the viewer for imported email files.

To select the correct file type, choose an option in the Files to include for email data list. The default type for new and existing cases is MHT. If you select this option, emails are rendered in the same way as before this new setting was introduced. You can also select MSG/EML with attachments. This option includes embedded attachments as part of the email document. Processes such as indexing, imaging, and export include any embedded attachments when acting on the email document.

You can view this setting for an existing ingestions job on the Properties page. The Email files value appears at the bottom of the page.

Note: We expect to add an option for MSG/EML without attachments in the future. We also expect to add an option to use the MHT rendering for searching and review while preserving an MSG/EML copy for export and production.

Manage Documents > Ingestions: NIST list updated - June 2019

In the Default Settings window for Ingestions, users can choose to exclude files that appear in the Reference Data Set (RDS) provided by the National Software Reference Library (NSRL), which is maintained by the National Institute of Standards and Technology (NIST). This is commonly referred to as the “NIST list.” Ingestions now uses the most recent version of this list, released in June 2019. You can view the list at the following link: https://www.nist.gov/itl/ssd/software-quality-group/nsrl-download/current-rds-hash-sets.

Portal Management > Cases and Servers: Only system administrators can disconnect a case

On the Portal Management > Cases and Servers page, the Disconnect case button on the toolbar appears only for system administrators. Previously, portal administrators also had access to this feature.

Disconnect case button

Portal Management > Reports: Enhancements to the Portal Summary Report

The following enhancements and changes are available in the Portal Summary Report on the Portal Management > Reports > Summary page:

  • Recent user activity list:
    • The list now shows activity for the last 30 days. Previously, the list showed activity for the last 7 days.
    • For failed logins, the login date and time appear in red font.
    • The Sessions column has been renamed to Cases and shows the number of cases that the user logged in to in the last 30 days. If the user did not log in to any cases, the value is zero (0).
    • Tip: Hover over a row in this column to display a tooltip that shows the case name and the last accessed date for all cases in the organization that the user accessed in the last 30 days. See the following figure for an example.

      Tooltip showing case name and last accessed date

Portal Management > Reports > Summary: New Settings feature to include or exclude data for deleted cases

A new Settings button is available on the toolbar on the Portal Management > Reports > Summary page.

Click the Settings button to open the Settings dialog box. By default, data for deleted cases is excluded from the reports. To display data for deleted cases, select the Include option.

Settings dialog box

If the Exclude option is selected, the data for deleted cases does not appear in the following sections on the Reports > Summary page:

  • The Cases bar chart (Total column)
  • The Recent user activity table
  • The Hosted data table and chart
  • Note: Below the chart in the Hosted data section, a message also indicates if the deleted cases data is included or excluded.

    Hosted data section chart showing if deleted cases data is included or excluded

SaaS / Hosted Monthly Release Notes - July 2019 (9.9.009 – 10.0.000)

Ringtail Connect API: Only an API user can copy their own token and key

If a user has been authorized to use the API, only that user can copy their API token and key.

A new API Access page displays the API token and API key, as well as links to copy the token and key. Previously, this information was available in the Portal Management > User Administration section.

Note: The API Access page is visible only if a user has been authorized to use the API.

To access the API Access page, on the Portal Home page, from the user name menu, select Account Settings. On the Account Settings page, in the navigation pane, select API Access.

Account Settings page showing the API Access settings

Ingestions: Settings added to the Properties page

The following information has been added to the Properties page for Ingestions.

  • Source encoding: The source encoding value selected in the Advanced settings window for ingestions.
  • Password bank: If the administrator did not select a password bank for ingestions in the Advanced settings window, the value displayed in this row is No. If the administrator selected a password bank, the value is Yes.
  • Chat settings: The following information is available in this section:
    • Idle time: Threads are broken into separate documents if the difference in sent times between two messages is equal to or greater than this number.
    • Minimum messages: Threads containing fewer messages than this number are not broken into separate documents.
    • Maximum messages: Threads containing more messages than this number are broken into separate documents.
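
For example, assuming the Idle time value is measured in minutes, settings of Idle time 60, Minimum messages 3, and Maximum messages 500 (illustrative values, not defaults) would split a thread into a new document wherever two consecutive messages were sent 60 or more minutes apart, leave threads with fewer than 3 messages unsplit, and always split threads with more than 500 messages.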

Ingestions: Chat: For documents in split threads, display the message counts for the thread in parentheses

In previous versions, when using Ingestions on chat thread documents, the count of messages at the top of the HTML for chat data was misleading for documents that were part of split threads.

In this version, the count of messages for each participant appears in parentheses after the total count of messages in the thread.

For example, if there are 17 messages from participant A in the thread, and 5 messages from participant A in a document that is part of the thread, the MSG column for participant A would contain a value of 17 (5). If a document is not from a split thread, the count in parentheses does not appear.

SaaS / Hosted Monthly Release Notes - June 2019 (9.9.005 – 9.9.008)

Office Online viewer: Faster document loading

When you move from one document to the next in the List pane or use the Next Document button on the main toolbar, the application starts loading the next Office Online Viewer document in advance, which allows reviewers to review more documents per hour.

Translate feature: Support for CJK target languages

In addition to English, French, German, and Spanish, you can now also translate documents into the following target languages:

  • Chinese (Simplified)
  • Chinese (Traditional)
  • Japanese
  • Korean
Translate selections drop down field

Translation content searches

On the Search page, in addition to English, French, German, and Spanish, you can now also search the translated content of documents using the following search-only fields:

Note: Anyone who has access to the Translate feature, as set by an administrator, can run these searches.

  • Translation Chinese (Simplified)
  • Translation Chinese (Traditional)
  • Translation Japanese
  • Translation Korean

Note: These memo field options are included in the Translations grouping category on the Search page.

After you run a search, Ringtail returns translated documents that include the content you searched for. Ringtail runs the search against documents that were translated with any available translation service.

Note: The search hits are not highlighted in the translated document.

Translation system fields

The following translations fields are available if an administrator granted access to them.

  • Translation Languages: For each document record, this field indicates which language the document was translated into. Current options include Chinese (Traditional), Chinese (Simplified), English, French, German, Japanese, Korean, and Spanish.

For example, on the Search page, as shown in the following figure, you can search for all documents that were translated into Japanese.

Translation Search page showing Japanese selection

Coding History: New audited features

Additional information about coding changes now appears in the Coding History pane for the following features:

  • Data Models: Connections changes for entities.
  • Other features: Binders, issues, productions (unlocked only), populations and samples, and annotations.

To view coding history, users must have security enabled for both the Coding History feature and the features for which they want to view audit information.

Coding History: Entity connection changes for data models

The following information appears for connection changes to entity items.

Note: For connection changes to appear, your administrator no longer needs to configure the entity connections as fields. All connection changes are audited.

Connection Changes for Entity Items

The following information is available in the Coding History pane:

  • Field: Entity name.
  • Value: The ID of the entity item it is connected to.
  • Previous value: The ID of the entity item that was disconnected.
  • Action: Linked or unlinked.
  • Date: Date and time of the action.
  • User: User who performed the action.

The following icons appear in the Coding History pane for entities:

  • Linked Linked indicator appears when entity is connected: Appears in the Action column when an entity item is connected.
  • Unlinked Unlinked indicator appears when entity is disconnected: Appears in the Action column when an entity item is disconnected.

Coding History: Binders, issues, productions, populations and samples, and annotations

The following information appears for actions to documents for the following features: binders, issues, productions (unlocked only), populations and samples, and annotations.

Information showing actions to documents

The following information is available in the Coding History pane:

  • Field: Feature type (Annotation, Binder, Population, Production, Issue, Sample).
  • Value: Name of the item that was added.
  • Previous value: Previous name of the item that was deleted, updated, or converted.
  • Action: Whether the item is added, updated, deleted, or converted.
  • Mass coded: The mass coded icon indicates that the action was a mass action applied to multiple documents at once.
  • Date: Date and time of the action.
  • User: User who performed the action.

The following icons appear in the Coding History pane:

  • Add Add indicator appears when an item is added: Appears in the Action column when an item is added.
  • Update Update indicator appears when an item is updated: Appears in the Action column when an item is updated.
  • Deleted Delete indicator appears when an item is deleted: Appears in the Action column when an item is deleted.
  • Convert Convert indicator appears when an annotation is converted: Appears in the Action column when an annotation is converted.
  • Mass code Mass code indicator appears when action is applied to multiple documents: Appears in the Mass Coded column when the action is applied to multiple documents.

Coding History: Warning message for audit history updates

If the coding history is still being updated, a warning message appears in red at the top of the page.

Coding history in progress warning message

Users should check back later for the latest coding history updates, or contact their system administrator if the message persists.

Data Models: View fields for directly or indirectly connected entities

If your administrator has added fields to directly or indirectly connected entities, you can view the details in the Conditional Coding pane, which allows you, for example, to see all of the references at the document level.

Conditional Coding pane showing document level references

Data Models: Add fields to directly or indirectly connected entities

You can now add fields for directly or indirectly connected entities on the Conditional Templates page for an entity.

Note: You can add fields only to an item with a singular connection to another item.

When you open the Conditional Templates page for an entity, a new menu that includes the connected data model entity types is now available. The default entity type for the template appears on this menu.

On the second menu, the list of fields is filtered to the fields for the selected entity type.

To add a field to an indirectly connected entity, select a field from the menu, and then click the Add field button.

Adding fields indirectly connected to an entity

The field is added and connected. Indirectly connected fields are read-only.

Note: The read-only setting for indirectly connected fields cannot be changed.

Indirectly connected field read-only setting

File management: New password protect option for downloaded files

You can now add a password to protect and encrypt your .zip file before you download it from the Manage Documents > Imaging – Manual page. If you add a password to the file, whoever opens the downloaded .zip file must enter the password you created to open the file(s).

Imaging - Manual page Download dialog

Note: The Password protect the file check box is cleared by default. If you want to enter a password, you must select the check box first.

Ingestions: Customize fields imported from Nuix

Not all available ingestions fields are needed in all cases. When creating ingestions default settings, administrators can now unselect unnecessary fields on the Customize Fields page, which is available in the Advanced Settings window of the Ingestions feature. Deselecting fields can save processing and import time and reduce the size of the database tables.

On the Customize Fields page, the selected fields are included in ingestions jobs. By default, most fields are selected. When cloning a case, these selections are also cloned.

Select or clear the checkboxes as needed to add or remove fields. You can also hover over the field name to see a description of the purpose of the field.

  • The following fields cannot be unselected:
    • Custodian
    • Document Date
    • Document Type
    • Evidence Job ID
    • [Meta] File Extension - Loaded
    • [Meta] Processing Exceptions
    • [RT] DPM File ID
    • [RT] Ingestion Exception Detail
    • [RT] MD5 Hash

Ingestions: Collect audio and video duration information

You can now collect information about the duration of ingested audio and video files, which allows you to predict the cost of audio and video transcription. The field that contains this information, [Meta] Multimedia Duration, is selected by default on the Ingestions > Advanced Settings > Customize Fields page. Duration is captured in minutes.

Ingestions: Collect image coordinate data

You can now collect coordinate data from ingested photos. The fields that contain this information, [Meta] Latitude and [Meta] Longitude, are selected by default on the Ingestions > Advanced Settings > Customize Fields page. The [Meta] Latitude value is a latitudinal geographic coordinate expressed in decimal degrees. The [Meta] Longitude value is a longitudinal geographic coordinate expressed in decimal degrees. Coordinate data is typically found in image files generated by a camera.

Ingestions: Collect extended file path information

You can now include the extended path to original files in ingestions jobs, including the folders that were mapped in the Ingestions settings. The field that contains this information, [Meta] Extended File Path, is available on the Ingestions > Advanced Settings > Customize Fields page. It is unselected by default. This field is similar to [Meta] File Path. In most cases, only one of the file path fields is needed.

User Administration: Configure an identity provider: Add a saml_cert line

When configuring an identity provider in User Administration > Identity Provider Settings, if the Provider name is SAML, which stands for Security Assertion Markup Language, the configuration must include a line for saml_cert. An example line is below, where <signing cert> is the SAML signing certificate from the identity provider.

"saml_cert": "<signing cert>"

An example of this in Ringtail is shown in the following figure.

Identity Provider Settings - Properties dialog showing the SAML cert

Note: Previously configured identity providers using SAML that do not have a saml_cert setting will no longer work after upgrading to this version of the login service.
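
As a sketch, assuming the provider settings are entered as JSON name/value pairs, the saml_cert line sits alongside the existing configuration; the ellipsis stands for the provider’s other settings:

{
  ...
  "saml_cert": "<signing cert>"
}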

Ringtail Connect API: New codeField mutation

The new codeField mutation allows you to apply a coding action (add, update, delete, or save a field value) to a specified document in a case. You can use the mutation to code a text field, date field, number field, memo field, Boolean field, or pick list.

You must have access to the case and document, and write permission to the field, or an error is returned. An error is also returned if an invalid caseId, mainId, or fieldId is specified.

The following example saves a value of 18772 to field 10406-18 for three documents.

mutation {
  codeField(input: {caseId: 213, action: Save, mainIds: [1, 2, 3], fieldId: "10406-18", value: "18772"}) {
    fieldId
    updatedCount
    totalCodedCount
    deletedCount
    notChangedCount
    changes {
      mainId
      result
      value
    }
  }
}
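
A response has the following general shape. This sample is a sketch based on the return fields above; the counts and result values are assumptions rather than output from a live system.

{
  "data": {
    "codeField": {
      "fieldId": "10406-18",
      "updatedCount": 3,
      "totalCodedCount": 3,
      "deletedCount": 0,
      "notChangedCount": 0,
      "changes": [
        { "mainId": 1, "result": "Updated", "value": "18772" },
        { "mainId": 2, "result": "Updated", "value": "18772" },
        { "mainId": 3, "result": "Updated", "value": "18772" }
      ]
    }
  }
}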

Ringtail Connect API: New fieldCodeUpdateSpecific and fieldCodeDeleteSpecific mutations

Use the new fieldCodeUpdateSpecific and fieldCodeDeleteSpecific mutations to update or delete coding values for specific fields.

For fieldCodeUpdateSpecific, you must specify a coded value to replace and the new value to replace it with, as shown in the following example. For a one-to-many field, only the specified value is replaced. Other values coded to the document are left as they were.

Like the codeField mutation, you must have access to the case and document, write permission to the field, and valid values for caseId, mainId, and fieldId.

mutation {
  fieldCodeUpdateSpecific(input: {caseId: 49, mainIds: [6], fieldId: "10007-18", newValue: "29", existingValue: "32"}) {
    fieldId
    insertedCount
    updatedCount
    totalCodedCount
    deletedCount
    notChangedCount
    changes {
      mainId
      result
      value
    }
  }
}

For fieldCodeDeleteSpecific, you must specify the value to delete. The following example deletes the value updated in the fieldCodeUpdateSpecific example.

mutation {
  fieldCodeDeleteSpecific(input: {caseId: 49, mainIds: [6], fieldId: "10007-18", existingValue: "29"}) {
    fieldId
    insertedCount
    updatedCount
    totalCodedCount
    deletedCount
    notChangedCount
    changes {
      mainId
      result
      value
    }
  }
}

Ringtail Connect API: History of changes user enabled or disabled for portal-level UI extension

Use the affectedUser field to query results for a user who was enabled or disabled for a portal-level UI extension.

{
  extensions(id: 7) {
    id
    name
    url
    audit(startDate: "2018-05-01", endDate: "2019-05-29") {
      case {
        name
      }
      organization {
        name
      }
      affectedUser {
        fullName
      }
      isEnabled
      date
      user {
        fullName
      }
    }
  }
}
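
A response might take the following shape. This is an illustrative sketch only; the names and date are invented, and the null case value (for a portal-level change) is an assumption.

{
  "data": {
    "extensions": [
      {
        "id": 7,
        "name": "Example extension",
        "url": "https://extension.example.com",
        "audit": [
          {
            "case": null,
            "organization": { "name": "Example Organization" },
            "affectedUser": { "fullName": "Example User" },
            "isEnabled": true,
            "date": "2019-05-15",
            "user": { "fullName": "Example Administrator" }
          }
        ]
      }
    ]
  }
}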

SaaS / Hosted Monthly Release Notes - May 2019 (9.9.001 – 9.9.004)

Data models: Access Data Model Entities from the Browse pane

You can now access data model entities from the Browse pane.

To add the Data Model Entities section to the Browse pane, on the toolbar in the Browse pane, click Options. In the Browse settings dialog box, select the checkbox next to Data Model Entities, and then click Save. The Data Model Entities section appears in the Browse pane.

Data Model Entities section appearing in Browse Pane

Updates to the Conditional Coding feature

  • The rendering performance of user templates is improved.
  • The following new default templates are now available in the Conditional Coding pane:
    • All Values
    • Production - Unlocked: Use this template to quickly add individual or multiple documents to, or remove them from, an unlocked production.
    • Conditional Coding selection of Production Unlocked

Portal UI extensions

The UI extensions feature allows administrators and service providers to extend the functionality of the application by embedding third-party web applications directly into the interface. A third-party web application that loads within the application is called a UI extension.

If your administrator configured portal UI extensions for your environment and granted you access to those extensions, you will see the extensions under Portal Extensions on the Portal Home page.

Portal Home page showing Portal Extensions

Click the name of an extension to open it on the Portal Extensions page.

Portal Extensions Page

Updated logic for adding an “i” suffix when exporting documents from search results

In previous releases, when users exported both images and native files from search results, if Ringtail identified the same file as both the image and the native file for a document, Ringtail would assign the image an “i” suffix. In this release, Ringtail assigns an “i” suffix to an image file only if it actually differs from the native file, that is, if the image file contains footers or annotations that the native file does not. If annotations are applied but the native file is omitted from the export (based on settings), Ringtail does not assign the “i” suffix to the image file.

In addition, if the option to embed text in a PDF is selected, the PDF is both the image and the native, and if no footers or annotations are applied, then Ringtail will export only the searchable PDF (with no “i” suffix). In this instance, only one PDF is exported with an MDB load file. The same PDF exists in the image and native folder for non-MDB exports.

Note: This functionality is available only to administrators and applies only to the custom export type when exporting from search results.

File Management: New password protect option for downloaded zip files

You can now add a password to protect and encrypt your .zip file before you download it. If you add a password to the file, whoever opens the downloaded .zip file must enter the password you created to open the file(s). This option is now available in the Manage Documents section in both Exports and File Repositories.

On the Manage Documents > Exports page or the Manage Documents > File Repositories page, select the file or folder you want to download. On the toolbar, click Download files. The Download files dialog box appears.

Note: If you select the Archive repository on the File Repositories page, the password option is hidden from the Download files dialog box. Because the file archives are already in .zip files, the system does not recompress them into new .zip files.

Download Files dialog

Note: The Password protect the file check box is cleared by default. If you want to enter a password, you must select the check box first.

In the Download files dialog box, you can select the Password protect the file check box and enter a password. When you click OK, a download window appears.

After your .zip file is downloaded, you can view it either from the download window or from File Explorer. When you open the file, the Password needed dialog box appears.

Password Needed dialog

In the Password needed dialog box, enter the password that you created in the Download files dialog box, and then click OK to open the encrypted file.

Ingestions: Allow users to view RPF output XML while task is in progress

Administrators can now view XML task output while an Ingestions processing task is in progress. When an administrator opens a task on the portal Processing > Jobs page, the task output XML appears on the XML page under Task Output. This information is based on the job progress at the time.

This update makes it easier to troubleshoot tasks that appear to be “stalled.” This update is available for both Windows and Linux systems.

Ingestions: Add advanced setting for source encoding

The Encoding page is now available in the Advanced Settings window of the Ingestions feature. On this page, users can select the source encoding for a case.

Advanced Settings window of Ingestions feature

Ingestions automatically detects the encoding for many files. When the encoding is not known, Ingestions uses the encoding selected in this setting.

On the Encoding page, select the correct encoding type in the Source encoding list. The default type for new and existing cases is windows-1252. Cloned cases will retain the setting from the clone source.

Add UI extensions to the Portal Home page

The UI extensions feature allows administrators and service providers to extend the functionality of the application by embedding third-party web applications directly into the interface. A third-party web application that loads within the application is called a UI extension.

System administrators and portal administrators can add and enable user interface extensions (UI extensions) on the Portal Home page. System administrators can add extensions for any organization or user, and portal administrators can add extensions for the organizations and users that they manage.

Previously, UI extensions were available only for the Case Home page and for Workspace panes.

Note: For general information about UI extensions, see the online help on ringtail.com. The current UI extensions topic includes information about how to add and enable UI extensions for the Case Home page and for Workspace panes. The process to add UI extensions to the Portal Home page is similar and will be added to the online help in a future release.

The high-level workflow is as follows:

  • Add a portal UI extension (manually or using a manifest file).
  • Enable a portal UI extension for organizations.
  • Enable a portal UI extension for users.
  • View the portal UI extension on the Portal Home page.

Use the following high-level procedure to add and enable a portal UI extension.

  1. On the Portal Home page, under Portal Management, click UI Extensions.
  2. On the UI Extensions page, click Add.

    UI Extensions page Add button
  3. To add a portal UI extension manually, in the Add UI extension window, on the Settings page, use the Basic Settings editor. In the Location list, select Portal home page. Provide all required information, and then click Next. On the Review page, review the information, and then click Save.

    Note: The default location is Workspace pane.

    Add UI extension Basic Settings Editor page
  4. To add a portal UI extension using a manifest file, do the following:
    • On the Settings page, in the Basic Settings editor, in the Name box, provide a name for the UI extension.
    • In the Location list, select Portal Home page.
    • Under Settings editor, select Advanced.

      Add UI extension when selecting Advanced
    • On the Settings page, in the Advanced Settings editor, do one of the following:
      • To upload a manifest file, click Browse.
      • Or, type the information in the box.

      Add UI extension showing Advanced options
    • Click Next.
    • On the Review page, review the information, and then click Save.

      Add UI extension Review page

    The next step is to enable the UI extension for organizations.
  5. On the Portal Home page, under Portal Management, click UI Extensions, and then click the name of a UI extension. You may need to refresh the page.

    Note: Portal UI extensions have brown icons, as shown in the following figure.

    UI Extensions name selection
  6. On the Properties page, review or modify the properties.

    Note: On the Properties page, you can also change the Location of an existing UI extension.

    UI Extensions showing Properties page
  7. On the Organizations page, enable the UI extension for one or more organizations.

    UI Extensions showing Organizations page
  8. On the Users page, enable the UI extension for one or more users.

    UI Extensions showing Users page
  9. The next time that a user refreshes the Portal Home page, the portal UI extension appears under Portal Extensions. Click the name of the extension to open it on the Portal Extensions page.

    Portal Home page when selecting Portal Extensions name
    Portal Extensions page with selected Organization

Set User audit migration service URL to view Coding History page

To enable access to the Portal Management > Cases and Servers > Coding History page, you must provide a valid URL for the User audit migration service URL portal option on the Portal Management > Settings > Portal Options page, as shown in the following figure.

Portal Options found under Portal Management page Settings option

Updated version number for installers

The version number of the Ringtail installers now includes a fourth segment, which makes it easier to differentiate newer installers from older ones. The fourth segment, previously a hash value, now starts numbering at 0 and increments by one for new service packs within the same weekly release.

For example, for the installer Ringtail9-DatabaseUtility_9.9.4.0, the installer name for the first service pack for that weekly release is Ringtail9-DatabaseUtility_9.9.4.1.

For versioned installers such as the Ringtail web installer, the four-segment number is in parentheses. For example, Ringtail9-Web_v9.9.004 (9.9.4.0).

After running installers, the fourth segment of the version number is also visible in the Windows Programs and Features, or Apps & features lists.

SaaS / Hosted Monthly Release Notes - April 2019 (9.8.009 – 9.9.000)

List pane: Options menu always visible

In the List pane, the Options menu is now always visible.

List pane showing the Options menu.

Previously, as shown in the following figure, you had to hover over the column next to the check box before the Options menu appeared.

List pane showing the Options menu.

Quick Search box: Search for Universal IDs

If you are working with data models, you can now use the Universal ID option in the Quick Search box on the Documents page to search for specific entity IDs or document IDs in a data model. Universal IDs can include entity IDs, document IDs, or rendition IDs.

In the Quick Search box, click the gear button to display the menu, and then select Universal ID.

Quick search box menu showing the Universal ID option.

In the Quick Search box, type a specific entity ID, for example, CUSTODIAN-00803.

Note: The entity ID is case sensitive.

Quick search box showing a search for a Universal ID (CUSTODIAN-00803).

Coding History: Track the history of connection changes for entities

In the Coding History pane, you can now track the history of connection changes for entities.

List pane and Coding History pane.

Note: Your administrator must configure the entity connections as fields.

Related pane: Connect existing entity item IDs

You can now connect multiple entity items to an active entity item by entering a list of entity item IDs to connect.

Use the following procedure to connect multiple entity items to an active entity item.

  1. In the List pane, select an entity item.
  2. In the Related pane, on the toolbar, click the Connect existing entity item IDs button. Related pane showing the Connect existing entity item IDs button.
  3. In the Connect existing entity item IDs dialog box, enter the entity item IDs.

    Connect existing entity item IDs dialog box.

    Note: Each line must contain a single existing entity item ID. You can paste a list of entity item IDs as long as each entity item ID starts on a new line.

  4. Click OK.

    The items found for that entity item type are linked to the active item. The linked items appear in the Related pane for the entity item type.

    Related pane showing the connected entity item IDs.
Related pane: Add Person or Organization entity

You can now add a Person or Organization entity item in two places.

Note: This example describes the procedure for a Person entity.

Option 1

  1. Return results for a Person entity.
  2. In the List pane, click the Add [name of entity] item button on the toolbar.
  3. In the Add [name of entity] item dialog box, provide a name, and then click OK.

    List pane showing the Add [name of entity] item button.

The Person entity item is added.

Option 2

If a Document entity has a connection to a Person or Organization entity, do the following:

  1. In the Related pane, on the Related Entities tab, on the toolbar, select the arrow next to the Add [name of entity] item button.
  2. Select one of the options on the menu. Related pane showing the Add [name of entity] item button with the Person menu. The Person menu includes the following items: Person-From, Person-To, Person-CC, Person-BCC.
  3. In the Add [name of entity] item dialog box, provide a name, and then click OK.

    Add [name of entity] item dialog box.

    The Person entity item is added.

    Related pane showing the added entity item.

Case Setup > Ringtail Data Models: Custom entity item ID

When you add an entity to a data model, you can now select the option that allows users to create a custom entity item ID.

Add entity to data model showing the Custom entity item ID option.

This allows users to add a custom entity item ID when adding entity items in the List or Related panes.

Add [name of entity] item dialog box showing a custom entity item ID.

Case Setup > Ringtail Data Models: Type pick list created for new data model entities

When you create a new entity in a data model on the Case Setup > Data Models > [Name of data model] page, a type pick list is created for the new entity.

On the Fields page for the new entity, you can view and modify the pick list items.

Fields page for an entity.

File Repositories: Archive repository updates

In order to process large archive files more efficiently, Ringtail now batches those archive files into multiple .zip files. The following feature updates support the processing improvements.

  • The following changes appear in the Archive repository list on the Manage Documents > File Repositories page:
    • The Type column is no longer on the page.
    • Each row represents one archive, regardless of whether multiple .zip files are generated when the archive is created.
  • In the Archive dialog box, which can be accessed on the Manage Documents > File Repositories page and on the Documents page > Tools menu, the .zip file extension is no longer a part of the Destination name, as shown in the following figure. The destination path now consists of the archive repository name and the output folder, which is named with a date/time stamp in the format YYYY-MM-DD-HH-MM-SS.

    Archive dialog box.

  • When you select an archive and click Download, all .zip files in the archive are downloaded. The selected items count is the number of archives (rows) selected on the page.

    Download files dialog box.

    When you click View size, the total number of .zip files for all selected archives appears.

  • The Archive File Name system field was renamed to Archive Name.

Ingestions: New Password Bank page in Advanced Settings

The Password Bank page is now available in the Advanced Settings window of the Ingestions feature. On this page, users can submit a list of known passwords for a case. Ringtail attempts to decrypt any encrypted files using those passwords. On the Manage Documents > Ingestions > Advanced Settings > Password Bank page, select the Use the password bank to decrypt the files check box. Ingestions uses the passwords in the bank to attempt to decrypt any encrypted files with the following file types:

  • Microsoft Office 2010+ (.docx, .xlsx, .pptx)
  • Microsoft Office pre-2010 (.doc, .xls, .ppt)
  • Adobe PDF documents (.pdf)
  • Zip archives (.zip)
  • 7Zip archives (.7z)
  • BitLocker

Note: Decryption adds approximately one minute of processing time for every 200 passwords attempted per encrypted file. For example, a bank of 1,000 passwords can add roughly five minutes of processing time for each encrypted file.

Under Upload, you can upload passwords in a plain text file in .txt format. The .txt file must contain one password per line.

Note: By default, if you upload a .txt file when existing passwords are already present, Ringtail adds new, unique passwords to the bank. If you select the Overwrite all previous passwords option, Ringtail overwrites all existing passwords.

To download a .txt file of the existing password bank, click Download password bank file. The file name for the .txt file is in the following format: "PasswordBank_{date/time}.txt."

To determine whether a password file has been uploaded for an ingestions job, open the job’s properties page and check the Ingestion Details row. The value is Yes if passwords were applied for that job, and No if they were not.

Ingestions: Split PST files to improve processing capacity and performance

When a .pst file is identified during file inventory, ingestions now splits that .pst into smaller files before processing. Splitting .pst files alleviates issues with large file sizes in non-AWS environments, and allows large .pst files to be distributed across more workers, which improves processing throughput.

This functionality is available only if you select the Enable Linux/Docker Ingestions case option. In a future release, the process to split .pst files will run regardless of this setting.

Ingestions reporting reflects the original file rather than the multiple split files. All split files are represented as a single original file with an aggregated document count. File path metadata reflects the path of the original file.

Note: In Ringtail 9.9.001, the Ingestions container split size case option will allow you to set a minimum size, in gigabytes, for splitting a .pst file. Only .pst files larger than the size indicated will be split. The default setting for this option is 8 gigabytes. If you set the option to 0, Ringtail will not split .pst files.

Ingestions: Group multi-segment FileSafe files

When FileSafe files are submitted for processing, the Ingestions feature groups any multi-segment FileSafe files into the same batch.

The following are example extension names of FileSafe files:

  • .mfs01
  • .mfs02
  • .mfs99
  • .mfs100
  • .mfs101

Imaging: Support added for EMF and EML files

Imaging now supports files with .emf and .eml file extensions.

In previous versions, the .eml file extension was included in the default setting for documents not to be imaged. New cases will no longer include the .eml file extension in this setting. However, in existing cases, users wishing to process .eml files must remove the .eml extension from the Extensions list on the Manage Documents > Imaging-automated > Settings > Common page.

Introducing the case decommission feature

Administrators can use the case decommission feature to remove a case and its associated files from a Ringtail portal. The following sections describe the updates that comprise the case decommission feature.

Delete button renamed to Disconnect case on Cases page

On the portal-level Cases and Servers > Cases page, the Delete button was renamed to Disconnect case and now appears next to the Connect to case button.

Disconnect case button location.

Decommission a case

The new Decommission button appears on the portal-level Cases and Servers > Cases page, on the More menu.

To decommission a case, select the check box next to the case name and select More > Decommission. The Decommission case window opens. The Summary page displays the case name, hosted size (if available), and the file repositories assigned to the case. The summary does not include external file repositories.

A message at the top of the page warns that if you decommission the case, the case and all associated files, including all case databases and all files from each of the listed file repositories, are permanently deleted with no backup.

To decommission the case, select the Delete this case and all associated files check box, and then click OK.

Decommission case window.

Ringtail then initiates an RPF job with the following stages:

  • A case metrics job runs as the first stage before the case is deleted.
  • The case is taken offline, any scheduled or running jobs for the case are disabled, and users can no longer access the case.
  • A separate browser window opens to show the progress of the delete operation. Closing the window does not affect the RPF job.
Processing window for case decommission.

You can also monitor the progress of the RPF job stages in the Progress column on the portal-level Cases and Servers > Cases page.

Note: For failed jobs, you can select the case on the Cases page and click Resubmit.

When the deletion is complete, the case appears on the portal-level Cases and Servers > Deleted Cases page and no longer appears on the Cases page.

Deleted Cases page

To open the Deleted Cases page, on the Portal Home page, click Cases and Servers, and then click Deleted Cases.

The Deleted Cases page displays all cases that were deleted from the portal. The page also includes information about the user who deleted the case, the case deletion date, the case creation date, and the associated organization.

If organization security is enabled, the list of available cases for portal administrators depends on membership in a provider or a client organization. System administrators can view all deleted cases.

Deleted Cases page.

Reporting for decommissioned cases

The number of days to display deleted case information is determined by the Days to display deleted cases in portal reports portal option, shown in the following figure. The default number of days is 60.

Portal option for deleted case reporting.

Deleted cases appear on the portal Reports pages.

  • On the Reports > Summary page, Ringtail displays the deleted case information in the following ways:
    • Deleted cases are included in the Total case count field, and not included in the Active case count field.
    • Data for deleted cases is included in the Hosted data (GB) counts field.
  • On the Reports > Usage, Hosted Details, and Users pages:
    • The Status column displays the Deleted case icon for the case. As shown in the following figure, the tool tip for the icon shows the date of the deletion. Deleted case status symbol.
    • You can use the column filter to include or filter out deleted cases from the reports that appear on this page. Deleted cases are filtered out by default.
  • If you download a report, the Status column in the downloaded report shows the date of the deletion. The data in the columns is up to date based on the case metrics job that was run prior to case deletion.

What’s new for developers

The following new or updated features are available to developers.

Ringtail Connect API: API updates for case decommission

Use the caseDecommissionStatus field to request the decommission status for a case.

  • All returns the decommissioned status for all cases.
  • Deleted is returned for decommissioned cases that are deleted.
  • Archived is returned for decommissioned cases that are archived.
  • Live is returned for active or inactive cases that are not decommissioned.

The following sample query returns the name and decommission status of all cases in the portal.

{
  cases {
    name
    caseDecommissionStatus
  }
}
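
A response pairs each case name with one of the status values described above, as in the following illustrative sketch (the case names are invented):

{
  "data": {
    "cases": [
      { "name": "Example Case A", "caseDecommissionStatus": "Live" },
      { "name": "Example Case B", "caseDecommissionStatus": "Deleted" }
    ]
  }
}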

After a case is decommissioned, you can use the cases query to get data about deleted cases such as the case creation date, decommission date, status, and user, as shown in the following example.

query cases_filtered {
  cases(decommissionStatus:Deleted) {
    name
    active
    caseCreatedDate
    caseDecommissionStatus
    caseDecommissionedBy
    caseDecommissionedDate
  }
}