SaaS / Hosted Monthly Release Notes - May 2020 (10.3.001 - 10.3.004)

Renumbering: Change the Document IDs and leveling of documents

You can now use the Renumbering tool in Nuix Discover to change the Document ID format of documents. After you specify a Document ID format, the application images the selected documents and converts them to PDF, applies the specified numbering rules, relevels the documents to match the new Document IDs, applies endorsements to the PDF images, and replaces the original PDF images with the endorsed versions. You can view the endorsed PDF images in the Image viewer in Nuix Discover.

Renumber documents

You can renumber imported documents using the Renumbering option on the Tools menu, shown in the following figure.

Caution: Do not run multiple simultaneous Renumbering jobs with the same Document ID prefix in a case.

Select one or more documents to enable this tool.

Renumbering document selection list

On the Exclusions page, shown in the following figure, you can specify the following:

  • The types of files to include in or exclude from renumbering.
  • Whether the native file should still appear in the Image viewer in Nuix Discover.
  • How the application handles documents that fail to image to PDF.
Renumbering > Exclusions page

On the Slipsheets page, shown in the following figure, you can select the files to insert slipsheets for, and use the variable builder to specify the text that appears on the slipsheets.

Renumbering > Slipsheets page

On the Document ID page, shown in the following figure, you can specify the following:

  • The format used to renumber the files. You can select a format that includes a prefix, box, folder, page, and delimiter, or a format that includes only a prefix and padding.
  • Whether document families must stay together in a folder after renumbering.
  • Whether levels should be updated to correspond to the new numbering. This option is available only if you select the Prefix, Box, Folder, or Page format.
Renumbering Document ID page

On the Endorsement page, shown in the following figure, you can specify the information that appears in the header and footer of renumbered documents.

Renumbering Endorsement page

Translate: Propagate translated text to duplicate documents

When you submit a document for translation, the translated text is propagated across all duplicates of the document, so that you do not have to translate each duplicate document individually.

Note: Documents with branded redactions are not translated.

Also, the following applies to the translated duplicate documents:

  • The Translation Language system field is coded with the same target language as the translated document.
  • The Translation Status system field is coded with the same value as the translated document.

Renumbering: Enable the renumbering feature

On the Security > Features page, administrators can enable the renumbering feature using the Processing - Renumbering option.

Renumbering: Enable renumbering system fields

On the Case Setup > System Fields page, administrators can make renumbering-related system fields available to users. If the renumbering system fields are enabled, users can search for the fields and display the fields as columns in the List pane.

The following renumbering system fields are available:

  • Renumbering Status
  • Renumbering Previous Document ID
  • Renumbering ID

Renumbering: View renumbering job properties

On the Manage Documents > Renumbering page, administrators can view the properties and progress of renumbering jobs. Click a renumbering job in the list to view the properties or errors for the job.

Note: Administrators can allow Group Members and Group Leaders to access the Manage Documents > Renumbering page. On the Security > Administration page, in the Leaders or Members columns, set the Manage Documents – Renumbering Management function to Allow, and then click Save.

Exports: Option to include blank text files

For custom export types (base or renditions), a new option is available in the Export window to include a blank .txt file for all documents in the export that are missing a .txt file. For base documents, the option is available on the File types page in the Settings window (available when you click the Settings button, or gear).

Option for base document export:

Export window Endorseable Image files options

For rendition documents, the option is available on the File types page.

Option for rendition document export:

Export (Renditions) File types page

If you select this option, along with the option to export content files, the application exports a blank .txt file for documents without an existing .txt file or associated extracted text. For base documents, the application names the .txt file according to the document ID. For rendition documents, the application names the .txt file according to the production document label. The blank .txt files are referenced in any load files that have a field for the text file name.

Note: When exporting base documents, if the application excludes any .txt files from an export because of annotations, a blank .txt file is not exported for those documents. The option to omit text files if a document is annotated is on the Annotations page in the Settings window (available when you click the Settings button, or gear).

To help administrators easily identify documents for which blank .txt files were exported, the following message appears on the Warnings page of the export job: “A blank content file (.txt) was exported because no content/.txt file was found for a document.”

Imaging: Add time zone setting for email file conversion

Administrators can now select a time zone for rendering native email files into images. The Time zone option is available in the Manage Documents > Imaging-Automated > Settings window on the Email and Website page. Administrators can select Use ingestions default or a specific time zone. If the administrator selects Use ingestions default, the application uses the time zone set in the default settings for Ingestions.

Imports: Prevent the creation of a new field with the same name as a system field

In the Import settings window, on the Field Map page, if a user creates a new field with the same name as an existing system field but of a different type, the application does not allow the user to continue. The field is outlined in red, and the following message appears: "New field cannot match an existing system field's name."

Processing > Index Status: Only document entities are included in the index status counts

On the Portal Management > Processing > Index Status page, shown in the following figure, only document entity items are included in the indexing counts in the Documents columns (Total, Indexed, Waiting, Excluded, Failed). Non-document entity items are not included in these counts.

Portal Management > Processing > Index Status page

Organizations: Schedule daily case metrics jobs

System administrators can now schedule daily case metrics jobs for organizations and all cases in those organizations.

Note: This feature is not available to portal administrators.

Use the following procedure to schedule a daily case metrics job for an organization.

  1. On the Portal Management > Organizations page, on the toolbar, click the Case metrics button.

    The Case metrics settings dialog box appears.

  2. In the Case metrics settings dialog box, shown in the following figure, in the Time list, select a time.

    Note: The time appears in the user’s local time.

  3. Select one or more organizations.

    Note: To select all organizations, select the blue checkmark, shown in the following figure.

  4. Click Save.
    Case metrics settings dialog box

    The jobs are scheduled to run daily, at the time you selected. The newly scheduled jobs are added to all existing cases for the selected organization or organizations. For cases that are added to an organization after the job has been scheduled, the settings for the organization apply.

    Note: These settings do not override previously scheduled jobs.

Use the following procedure to cancel a daily case metrics job.

  1. Open the Case metrics settings dialog box.
  2. Clear the check box for the selected organization or organizations.
  3. Click Save.

After you schedule a daily case metrics job, in the table on the Portal Management > Organizations page, an icon in the second column indicates if a daily case metrics job is scheduled for an organization, as shown in the following figure.

Note: This column is visible only to system administrators.

Portal Management > Organizations page

Once the daily case metrics job is complete, the values in the following columns are updated on the Portal Management > Reports > Hosted Details page:

  • Base documents (GB)
  • Production renditions (GB)
  • Databases (GB)
  • Elasticsearch index (GB)
  • Content index (GB)
  • Predict (GB)

The values in the following columns are not updated as part of a daily case metrics job. Rather, the values in these columns reflect the values from the last Gather case metrics job that was run:

  • Orphan (GB)
  • File transfer data (GB)
  • Archive data (GB)
  • Missing (GB)

To update the values for these columns, you must run a full Gather case metrics job on the Portal Management > Processing > Jobs page.

Connect API Explorer: Assign users to case groups using the userGroupAssign mutation

The Connect API Explorer userGroupAssign mutation allows you to assign users to case groups, simplifying management of case access. This mutation allows you to perform multiple assignments simultaneously by pairing a single userId to a groupId, or multiple userIds to a groupId. Only the userId field allows this many-to-one assignment format. All other fields can only assign in a one-to-one format.

If a user you assign to a group already has an existing assignment to that group, the notChangedCount value increases accordingly.

Required fields:

  • userId
  • caseId
  • groupId

Sample mutation:

mutation {
  userGroupAssign(input: [
    { userId: [7,9,10,11], groupId: 13, caseId: 8 },
    { userId: 8, groupId: 13, caseId: 4 }
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}
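
The response reports the outcome counts requested in the mutation selection set. The following is a hypothetical response for the mutation above (a sketch modeled on the sample response shown for the userOrganizationAssign mutation in the March 2020 notes; the counts are illustrative only):

{
  "data": {
    "userGroupAssign": {
      "totalCount": 5,
      "successCount": 4,
      "errorCount": 0,
      "notChangedCount": 1
    }
  }
}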

Connect API Explorer: Update organization settings using the organizationUpdate mutation

The Connect API Explorer organizationUpdate mutation gives system and portal administrators the ability to update organization settings to help manage the organizations within the application.

Required fields:

  • organizationId: Integer, identifies the organization in the portal.

Optional fields:

  • name: String, organization name in the portal.
  • accountNumber: String, account number of the organization being modified.
  • caseId: Integer, identifies the default template case for the organization in the portal.

Sample mutation:

mutation {
  organizationUpdate(input: [
    {organizationId: 4, name: "ABC Corp", accountNumber: "87597117"},
    {organizationId: 6, name: "XYZ Corp", caseId: 10}
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

Connect API Explorer: Unassign users from case groups using the userGroupUnassign mutation

The Connect API Explorer userGroupUnassign mutation allows you to unassign users from case groups to more precisely manage case access. Portal Administrators who are assigned to a case can unassign Portal Users and other Portal Administrators from the groups in that case.

Required fields:

  • userId: Integer, identifies the user in the portal.
  • caseId: Integer, identifies the case in the portal.
  • groupId: Integer, identifies the user group in the case.

Sample mutation:

mutation {
  userGroupUnassign(input: [
    {userId: [7,9,10,11], groupId: 13, caseId: 8},
    {userId: 8, groupId: 13, caseId: 4}
  ])
  {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

SaaS / Hosted Monthly Release Notes - April 2020 (10.2.009 - 10.3.000)

Introducing the Memo Editor pane

Nuix Discover now has a new Memo Editor pane, as shown in the following figure. This new pane is available for both documents and entities. It contains the existing Memo Editor formatting capabilities, as well as a new and quicker way of creating and removing links, and a new feature for downloading memos in Hypertext Markup Language (HTML) format.

Memo Editor pane

Note: The Memo Editor pane does not replace the existing editor capability within memo fields.

The following list provides an overview of the features available in the Memo Editor pane:

  • Memo field selection: To switch between the active memo fields, click the drop-down list on the toolbar and select a field. You can select the Comments, Timelines Description, [Meta] Chat HTML, and Document Description fields, as well as many others.

    Note: From the memo fields drop-down list, the Memo Editor pane allows you to access only the one-to-one memo fields.

  • Hyperlinks: Creating hyperlinks to documents, binders, transcripts, or other data takes fewer mouse clicks using the Memo Editor pane. Hyperlinking is also available for both documents and entities. In previous releases, you could not hyperlink to entities.
    • When you enter text to create a link and either double-click or highlight the text, an inline menu appears that contains the Document link, Object link, Transcript link, Web link, and Remove link options, as shown in the following figure.
      Memo Editor inline menu

      After selecting a link option, a dialog box appears that allows you to search for and to select the link data.

      Note: The inline menu for linking replaces the Link toolbar button in the existing memo capability.

    • To view link contents, each link contains a tooltip that appears when you point to an existing link, as shown in the following figure.
      Link contents tooltip
    • To open linked content, hold down the Ctrl key and click the link.
    • To remove a link, double-click the link and select Remove link.

      Note: You cannot edit the text in existing links. You must first remove the link, then correct spelling errors or other mistakes made in the link text.

  • Auto-search linking: The Mentions feature allows you to quickly search for documents to link.
    • When you type the hash (#) sign, followed by six or more characters of a Document ID, an inline list appears with matching search results, as shown in the following figure.
      Memo Editor inline list for entering document links

      Select an item from this list to automatically create a link to the selected document and insert the link into the memo.

      Tip: You cannot create a link back to the active document.

  • Downloading: The Download button allows you to export memos to HTML, as shown in the following figure.
    Memo Editor export sample of the memo

    The top portion of the HTML file shows general information such as the case, user, and date downloaded. The memo text follows.

    • If the memo contains links, you can view the link contents in the same manner as in the Memo Editor pane. However, because transcript links are embedded data and do not have an associated URL, they do not open from the downloaded HTML file. They open only from the Memo Editor pane.

      Note: If you have not previously logged in to Nuix Discover, the login page appears before opening the linked document.

Search page option added to the Case Home menu

You can now access the Search page from the Case Home menu, as shown in the following figure.

Case Home Search page

Default start page for a group

Your administrator can now define the Nuix Discover page that appears for your group after you log in to the application. For example, if your administrator sets the start page for your group to the Documents page, that page appears after you log in, with Workspace A displayed.

View pane: MHT documents converted to PDF in the Native view in the View pane

The application now converts .mht documents to a PDF format when you access them in the Native view in the View pane, as shown in the following figure.

Native view in the View pane

Security > Features: Memo Editor pane configuration

To make the Memo Editor pane available to users, on the Case Home > Security > Features page, an administrator must set the Document – Memo editor feature to Allow for a group. By default, this feature is set to Deny.

Security > Groups: Set the default start page for a group

You can now set a default start page for a group. One of the benefits of this new feature is that you can, for example, route users directly to the Documents page so that they can start reviewing documents. Workspace A appears by default on the Documents page.

Use the following procedure to set the start page for an existing group.

  1. On the Security > Groups page, in the Name column, click the link for a group.
  2. On the Properties page, in the Start page list, shown in the following figure, select one of the following start pages: Documents, Search, Transcripts, Production Pages, Security, Case Setup, Manage Documents, Review Setup, Analysis.

    Note: The Case Home page is the default start page.

    Properties page Start page pick list options
  3. Click Save.

    The next time a member of the group logs in to the application, the designated start page appears. For example, if you set the Documents page as the start page, the Documents page appears by default.

Use the following procedure to set the start page when you create a new group.

  1. On the Security > Groups page, on the toolbar, click Add.
  2. In the Create group dialog box, shown in the following figure, do the following:
    • In the Name box, provide a name.
    • In the Start page list, select a page.
      Create group dialog box
  3. Click Save.

Portal Management > User Administration: Require SAML users to re-enter credentials after logging out

System administrators can now require users who use a Security Assertion Markup Language (SAML) provider for authentication to re-enter their credentials after logging out of Nuix Discover.

To add this requirement, go to the Portal Home > User Administration > Identity Provider Settings page and click on the name of the configuration. On the Properties page, in the Configuration section, enter the following line:

"saml_force_reauth": "true"
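
If the Configuration section is otherwise empty, the resulting value might look like the following (a minimal sketch, assuming the Configuration section holds a JSON object; keep any existing settings alongside the new line):

{
  "saml_force_reauth": "true"
}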

Portal Management > Processing: Download a log for a Supervisor

You can now download a log for a supervisor. The log includes error and info messages.

To download a log in .csv format, on the Logs page for a Supervisor, click the Download logs button, shown in the following figure.

Download logs button

Portal Management > Settings: Text extraction: Update batching logic in text extraction job

In previous versions, the application processed text extraction jobs in batches using the number of files per batch that was specified in the Extract text job batch size case setting. (To access this setting, go to the Portal Management > Cases and Servers > Cases page and click the name of a case.)

To efficiently accommodate larger files, portal administrators can now set batch thresholds by file size using the Extract text job max batch file size portal setting, shown in the following figure.

To access this setting, go to the Portal Management > Settings > Portal Options page. The application sizes each text extraction batch using whichever threshold is reached first: the number of files specified in the case setting or the maximum total file size per batch specified in the portal setting.

Portal Management > Settings > Portal Options page showing information tooltip

Import API: Delete files from the S3 bucket upon completion of an import job

If an import job is set to copy files from S3, the application deletes the files from the S3 bucket once they are copied. The application deletes files only for import jobs that complete successfully; it does not delete files for failed import jobs.

SaaS / Hosted Monthly Release Notes - March 2020 (10.2.005 - 10.2.008)

Translate: New and updated source languages

The Translate feature now includes additional source language options, for example, Irish and Punjabi, when translating with Microsoft.

Some of the source language options for Google have been renamed. For example, Portuguese has been renamed to Portuguese (Portugal, Brazil).

These new or updated source language options are available in the Translate workspace pane and the Tools > Translate dialog box.

Coding History for fields updated by import jobs

The Coding History feature now captures audit records for field values that are updated by import jobs for existing document records.

The Coding History pane will include the following information:

  • The updated field value.
  • The user who created the import job as well as the date and time of the import job.
  • The previously coded value that was changed.
  • The user who applied the coding as well as the date and time of the previous coding.

Note: Your administrator must grant you read access to these fields so that they appear in the Coding History pane.

Imports: Delete data from S3 bucket after completing import jobs

If files in an import job are copied from S3, the application deletes the files from the S3 bucket once the import job completes successfully.

Productions: New Quality Control check for annotations that are not applied to the production

A new quality control check, Annotations exist that are not selected to be applied, has been added to the Quality Control page for productions, as shown in the following figure. This check is enabled when at least one production rule other than Custom placeholder is selected on the Production rules page.

The Annotations exist that are not selected to be applied check identifies documents that have annotations applied to them that are not applied in the production.

If the application identifies any affected documents, a message that indicates the number of documents appears in the Result column on the Quality Control page for the production. Click the message to view the affected documents on the Documents page.

Documents page

Organizations: Set default file repositories

System administrators can now set default file repositories for an organization on the organization’s Properties page, as shown in the following figure.

Properties page

Note: The lists do not populate by default. The options in the lists include the file repositories that appear on the File Repositories page for an organization.

The options in these lists include:

  • Image: Image or Index repositories
  • Index file: Image or Index repositories
  • File transfer: Image or Index repositories
  • Archive: Archive repositories
  • External: External repositories

The following three new columns now appear on the File Repositories page for an organization, as shown in the following figure.

File Repositories page
  • Default repository for:
    • If a file repository is the default repository, the values for indexes or images appear in this column.
    • Note: If a file repository is not linked to an organization, the default repository value does not appear on the Properties page for the organization.

  • Archive: If the file repository is an archive file repository, a dot appears in the Archive column.
  • External: If the file repository is an external file repository, a dot appears in the External column.

Organizations: Set default servers

System administrators can now set default servers for an organization on the Properties page, as shown in the following figure.

Note: The lists do not populate by default. The options in these lists include the servers that appear on the Servers page for an organization.

Servers page
  • Database server: Database servers that you have permission to access.
  • Analysis server: Analysis servers that you have permission to access.

A new Default column appears on the Servers page for an organization, as shown in the following figure.

If a server is a default server, a dot appears in the Default column.

Note: If no servers are linked to the organization, this information does not appear on the Properties page for an organization.

Properties page Default column

Processing > Supervisors: Logs page for RPF supervisors

A new Logs page is available in the navigation pane on the supervisor Properties page.

To access this page, from the Portal Home page, go to Portal Management > Processing > Supervisors and select a supervisor in the list. The Logs page displays log information about the supervisor, which can help you identify error messages that may not otherwise appear in the interface.

Connect API Explorer: Query assignment data for report generation

The Connect API Explorer allows you to gather assignment data to generate reports that can show process workflows, phases, and user assignments.

The following lists the available fields for an assignment object query:

  • id
  • status: Object that extracts the following values:
    • Unassigned
    • Active
    • Suspended
    • Cleared
    • Deleted
    • Revoked
  • workflow: Object to extract the following field data:
    • description
    • id
    • name
    • phases
  • phases: Object to extract the following field data:
    • documentsPerAssignment
    • id
    • locked
    • name
    • parentId
    • parentPhaseName
    • quickCode
    • validationCriteriaName
  • lot: Object to extract the following field data:
    • id
    • name
  • name
  • user
  • assignedDate
  • clearedDate
  • createdDate
  • clear
  • total

Sample query:

query {
  cases (id: 5) {
    reviewSetup {
      workflows (id: 7) {
        phases (id: 10) {
          id
        }
      }
      assignments (id: 8) {
        id
      }
    }
  }
}
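
The following is a fuller sketch that requests several of the assignment fields listed above in a single query (the field selection and ids are illustrative; adjust them to your case):

query {
  cases (id: 5) {
    reviewSetup {
      assignments (id: 8) {
        id
        name
        status
        assignedDate
        clearedDate
        workflow {
          id
          name
        }
        lot {
          id
          name
        }
      }
    }
  }
}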

Connect API Explorer: userUpdate mutation for administration tasks

The Connect API Explorer userUpdate mutation allows administrators to perform updates to multiple user accounts simultaneously. When building this mutation, you must include the userId field to identify the user accounts.

Optional fields:

  • firstName
  • lastName
  • email
  • companyId
  • identityProviderId
  • portalCategory
  • disabled
  • requirePasswordChange: Previously named forceReset
  • licenses
  • password
  • addToActiveDirectory
  • forceResetChallengeQuestions

Important: When passing a blank field value, the mutation will remove the field. For example, the mutation will remove the disabled field if you enter disabled: "". When entering new values for either firstName or lastName, the mutation updates the entire name.

Sample mutation:

mutation {
  userUpdate(input: [
    {userId: 200, firstName: "Fred", lastName: "Doo"},
    {userId: 1, firstName: "Velma"},
    {userId: 1, lastName: "Doo"}
  ]) {
    users {
      id
      fullName
    }
  }
}
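
To illustrate the blank-value behavior described in the Important note above, the following sketch removes the disabled field from a user account (a hypothetical example that uses the documented disabled: "" form):

mutation {
  userUpdate(input: [
    {userId: 200, disabled: ""}
  ]) {
    users {
      id
      fullName
    }
  }
}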

Connect API Explorer: Clone cases using caseClone mutation

The caseClone mutation allows you to quickly create new cases without having to use the Nuix Discover UI. The following describes the mutation acceptance criteria.

Required fields:

  • caseName
  • organizationId: Identifies the organization whose default template is used for cloning.

Optional fields:

  • sourceCaseId: Optional; defaults are based on the user’s organization. If sourceCaseId is missing and a default template is selected, the mutation uses the organization’s default template case. If sourceCaseId is missing and no default template is selected, the application returns the following message: A sourceCaseId must be included in this mutation when an organization does not have a default template case.
  • description
  • scheduleMetricsJob = true (default): If true, the schedule is set to Monthly on day 31 at 11:00 PM.

The following lists the non-configurable fields that inherit the organization’s default or have a hard-coded default:

  • active = true (default)
  • clearData = true (default)
  • databaseServerId
  • imageRepositoryId
  • indexRepositoryId
  • fileTransferRepositoryId
  • analysisServerId
  • archiveRepositoryId
  • externalRepositoryId

The following lists some of the available result fields for the caseClone mutation:

  • processingStatus: Object that extracts the following case processing status:
    • Failure
    • Pending
    • Queued
    • Succeeded
    • SucceededWithWarnings
  • processingType: Object that extracts the following case processing type:
    • Clone
    • Connect
    • Create
    • Decommission
    • DeleteDecommissionCase
    • Edit
    • Recommission

Note: This mutation does not support setting the case metrics schedule to daily (time), weekly (weekday, time), or monthly (day, time).

Sample mutation with defaults:

mutation clone {
  caseClone (input: {
    organizationId: 1,
    sourceCaseId: 2,
    caseName: "My new clone"
  }) {
    case {
      id
    }
  }
}

Sample mutation with options:

mutation clone {
  caseClone (input: {
    organizationId: 1,
    sourceCaseId: 2,
    caseName: "My new clone",
    description: "This is my cloned case",
    scheduleMetricsJob: true
  }) {
    case {
      id
    }
  }
}

Connect API Explorer: Remove assigned users from cases using the userCaseUnassign mutation

The Connect API Explorer userCaseUnassign mutation allows you to remove assigned users from cases, simplifying management of case access. This mutation allows you to remove multiple assignments simultaneously by pairing a single userId to a caseId, or multiple userIds to a caseId. Only the userId field allows this many-to-one removal format. All other fields can only remove in a one-to-one format.

If a user you unassign from a case has no existing assignment to that case, the notChangedCount field increases accordingly.

Required fields:

  • userId
  • caseId

Sample mutation:

mutation {
  userCaseUnassign(input: [
    {userId: [7,9,10,15], caseId: 120},
    {userId: 11, caseId: 121},
    {userId: 8, caseId: 120}
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

Connect API Explorer: Assign users to organizations using the userOrganizationAssign mutation

The Connect API Explorer userOrganizationAssign mutation allows you to assign users to organizations to help manage user assignments. This mutation allows you to perform multiple assignments simultaneously by pairing a single userId to an organizationId, or multiple ids to an organizationId. Only the userId field allows this many-to-one assignment format. All other fields can only assign in a one-to-one format.

If a user you assign to an organization already has an existing assignment to that organization, the notChangedCount field increases accordingly.

Required fields:

  • userId
  • organizationId

Sample mutation:

mutation {
  userOrganizationAssign(input: [
    {userId: [7,9,10,15], organizationId: 4},
    {userId: 7, organizationId: 10},
    {userId: 8, organizationId: 4}
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

Sample response:

{
  "data": {
    "userOrganizationAssign": { "totalCount": 6, "successCount": 4, "errorCount": 1, "notChangedCount": 1 }
  },
  "errors": [{ "message": "Failed to assign the following users to organization 4: 8" }]
}

Connect API Explorer: Assign users to cases using the userCaseAssign mutation

The Connect API Explorer userCaseAssign mutation allows you to assign users to cases, simplifying management of case access. This mutation allows you to perform multiple assignments simultaneously by pairing a single userId to a caseId, or multiple userIds to a caseId. Only the userId field allows this many-to-one assignment format. All other fields can only assign in a one-to-one format.

New assignments automatically set the Access Restrictions to None as the default. Currently, the mutation does not have the ability to change this setting to another option. You must modify these settings manually through the UI.

When you assign a user to a case and the user already has an existing assignment to that case, leaving the caseGroupId field blank does not change the existing caseGroupId data for that user. If a user was previously assigned to a group in a case and is then removed from that case, re-adding the user to the case without specifying a group places them back into the group to which they previously belonged.

If a user you assign to a case already has an existing assignment to that case, the notChangedCount field increases accordingly.

Note: Portal administrators cannot assign a user to a case outside their own organization.

Required fields:

  • userId
  • caseId
  • caseUserCategory

Optional fields:

  • caseGroupId

Sample mutation:

mutation {
  userCaseAssign(input: [
    {userId: [7,9,10,15], caseId: 120, caseUserCategory: Administrator, caseGroupId: 34},
    {userId: [8], caseId: 120, caseUserCategory: GroupMember, caseGroupId: 34}
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

Connect API Explorer: Query users and their groups within cases

The Connect API Explorer allows you to query information on users and their groups within cases to help manage users and groups across review platforms. You can filter and sort the group data by name, id, or userCount (which supports NumericComparison). You can also paginate the query results by using the standard scroll parameter (for example, scroll: {start: 1, limit: 100}).

Note: To return the users of a specific group, add the users node under groups.

The following lists the available fields for querying user and group data:

  • groups: Object to extract the following field data:
    • id
    • name
    • userCount
    • timelineDate
    • quickCode
    • startPage
    • users

Sample query:

query cases {
  cases(id: 5) {
    name
    groups (id: 17, name: "group name", sort: [{ field: Name, dir: Asc }]) {
      id
      name
      userCount
      users {
        id
        name
      }
    }
  }
}
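
The following sketch shows the same query using the scroll parameter to page through groups (the argument placement is illustrative):

query cases {
  cases(id: 5) {
    groups (scroll: {start: 1, limit: 100}, sort: [{ field: Name, dir: Asc }]) {
      id
      name
      userCount
    }
  }
}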

Connect API Explorer: Cross organization cloning using caseClone mutation

The caseClone mutation now allows cloning cases across organizations without using UI Extensions. The following is the acceptance criteria for this process.

Fields:

  • caseName: Required data.
  • organizationId: Required data.
  • sourceCaseId: Optional data with defaults based on the user’s organization.
    • When not included, the mutation will use the organization’s default case template.
    • When not included and there is no default case template, the mutation uses the portal default case template.
    • When not included and there is no default case template or a portal case template, the application returns the following message: A sourceCaseId must be included in this mutation when the portal and organization do not have a default template case.
  • description: Optional data.
  • scheduleMetricsJob = true (default): Optional data. If true, schedule is set to Monthly on day 31 at 11:00 PM.
    • The mutation does not support setting the case metrics schedule to daily (time), weekly (weekday, time), or monthly (day, time).

The following are non-configurable fields and inherit the organization defaults or have a hard-coded default:

  • active = true (default)
  • clearData = true (default)
  • databaseServerId
  • imageRepositoryId
  • indexRepositoryId
  • fileTransferRepositoryId
  • analysisServerId
  • archiveRepositoryId
  • externalRepositoryId

The following examples show how to use these defaults and options.

Sample mutation with defaults:

mutation clone {
  caseClone(input: {
    sourceCaseId: 1,
    caseName: "My new cloned case"
  }) {
    case {
      id
    }
  }
}

Sample mutation with options:

mutation clone {
  caseClone(input: {
    organizationId: 11,
    sourceCaseId: 12,
    caseName: "My new cloned case",
    description: "This case is described",
    scheduleMetricsJob: true
  }) {
    case {
      id
    }
  }
}

SaaS / Hosted Monthly Release Notes - February 2020 (10.2.001 - 10.2.004)

Exports: New image formatting options

When exporting images in a custom export, you now have the option to convert images to searchable PDFs. This option is available on the Image settings page of the Export window.

Export - Image settings page showing Image format for PDFs

In the Image format list, if you select Convert to searchable PDFs, the application converts any non-PDF endorsable image files into searchable PDFs. For existing PDF image files, the application embeds text in the PDF file.

Note: If you select an option for image formatting that converts an image type, only the exported image file is affected. No files on the Nuix Discover fileshare are altered.

In the Image format list, if you select either Convert to searchable PDFs or Embed OCR text in existing PDFs, additional options are available. These options include PDF resolution, Performance, Auto-rotate, Despeckle, Deskew, and Languages. These options existed in previous releases for embedding OCR text in existing PDFs. However, the list of language options has been expanded to match the list of language options that is available in the OCR tool on the Documents page. On the Image settings page, you can click the Settings button (or gear) and select languages in the Settings window. The default language is English.

If you select either the Convert to searchable PDFs or Embed OCR text in existing PDFs options, you also have the option to select the Unless annotations or footers are applied, do not run OCR on PDFs if the documents are already coded as searchable check box. This check box is selected by default. When selected, for any existing PDF files, the application checks the Document OCR Status field. If that field is set to Completed – Embedded text in the PDF or Completed with warnings – Embedded text in the PDF and no annotations or footers are applied on any page of the document, then the application does not attempt to make that PDF file searchable.

Note: The application updates the Document OCR Status field for base or rendition documents if they are made searchable using the OCR tool on the Documents page. The application also updates this field through the production print process on rendition documents, if the option to embed text in existing PDFs is selected. If you make PDFs searchable using the OCR tool or the production print process, the language options may not be the same as the options selected during export.

Export - Image settings page Recognized language options

For efficiency, if the Unless annotations or footers are applied, do not run OCR on PDFs if the documents are already coded as searchable check box is not selected, the application attempts to make searchable only those pages that need it.

  • The application attempts to make each page searchable that has annotations or footers.
  • If no annotations or footers exist on a page, the application checks for any text on the page. If text exists, the application uses the original page. Otherwise, the application attempts to make the page searchable.
  • Note: Language selections for exports may be different from the languages selected when making the original page searchable.

Productions: New PDF Settings page

We have added a new settings page for productions named PDF settings. This page contains settings that previously appeared on the Endorsements settings page when the Enable PDF annotations option was set for a case.

Note: When the Enable PDF annotations option is not set for the case at the time that a production is created, the PDF Settings page does not appear for that production.

Language options have been expanded on the new PDF Settings page. When embedding OCR text in PDF images during the production print process, you can select from the same list of languages to use for text recognition that appears in the OCR tool on the Documents page. You can also select more than one language.

PDF Settings page

If the Embed OCR text in existing PDF images option is selected on the page, the application updates the Document OCR Status field (and if needed, the Document OCR Error Details field) for the rendition document to reflect the OCR status of the PDF image of the rendition.

Connect API Explorer: Query extensions in the API

There is a new query in the Nuix Discover Connect API Explorer for retrieving a list of extensions.

This query retrieves the following extension data:

  • Id: Integer.
  • Name: String.
  • Location: Enumerator.
  • Configuration: String.
  • Description: String.
  • URL: String.

Sample query:

{
  extensions {
    id
    name
    location
    configuration
    url
    description
    createdDate
    createdByUser {
      id
      fullName
    }
  }
}

SaaS / Hosted Monthly Release Notes - March 2020 (10.2.005 - 10.2.008)

Translate: New and updated source languages

The Translate feature now includes additional source language options, for example, Irish and Punjabi, when translating with Microsoft.

Some of the source language options for Google have been renamed. For example, Portuguese has been renamed to Portuguese (Portugal, Brazil).

These new or updated source language options are available in the Translate workspace pane and the Tools > Translate dialog box.

Coding History for fields updated by import jobs

The Coding History feature now captures audit records for field values that are updated by import jobs for existing document records.

The Coding History pane will include the following information:

  • The updated field value.
  • The user who created the import job as well as the date and time of the import job.
  • The previously coded value that was changed.
  • The user who applied the coding as well as the date and time of the previous coding.

Note: Your administrator must grant you read access to these fields, so that the fields appear in the Coding History pane.

Imports: Delete data from S3 bucket after completing import jobs

If files in an import job are copied from S3, the application deletes the files that were in the S3 bucket once the import job is successfully completed.

Productions: New Quality Control check for annotations that are not applied to the production

An Annotations exist that are not selected to be applied quality control check has been added to the Quality Control page for productions, as shown in the following figure. This check is enabled when at least one production rule other than Custom placeholder is selected on the Production rules page.

The Annotations exist that are not selected to be applied check identifies documents that have annotations applied to them that are not applied in the production.

If the application identifies any affected documents, a message that indicates the number of documents appears in the Result column on the Quality Control page for the production. Click the message to view the affected documents on the Documents page.

Documents page

Organizations: Set default file repositories

System administrators can now set default file repositories for an organization on the organization’s Properties page, as shown in the following figure.

Properties page

Note: The lists do not populate by default. The options in the lists include the file repositories that appear on the File Repositories page for an organization.

The options in this list include:

  • Image: Image or Index repositories
  • Index file: Image or Index repositories
  • File transfer: Image or Index repositories
  • Archive: Archive repositories
  • External: External repositories

The following three new columns now appear on the File Repositories page for an organization, as shown in the following figure.

File Repositories page
  • Default repository for:
    • If a file repository is the default repository, the values for indexes or images appear in this column.
    • Note: If a file repository is not linked to an organization, the default repository value does not appear on the Properties page for the organization.

  • Archive: If the file repository is the default file repository, a dot appears in the Archive column.
  • External: If the file repository is an external file repository, a dot appears in the External column.

Organizations: Set default servers

System administrators can now set default servers for an organization on the Properties page, as shown in the following figure.

Note: The lists do not populate by default. The options in these lists include the servers that appear on the Servers page for an organization.

Servers page
  • Database server: Database servers that you have permission to access.
  • Analysis server: Analysis servers that you have permission to access.

A new Default column appears on the Servers page for an organization, as shown in the following figure.

If a server is a default server, a dot appears in the Default column.

Note: If no servers are linked to the organization, this information does not appear on the Properties page for an organization.

Properties page Defaule column

Processing > Supervisors: Logs page for RPF supervisors

A new Logs page is available in the navigation pane on the supervisor Properties page.

To access this page, from the Portal Home page, go to Portal Management > Processing > Supervisors and select a supervisor in the list. The Logs page displays log information about the supervisor, which can help you identify error messages that may not otherwise appear in the interface.

Connect API Explorer: Query assignment data for report generation

The Connect API Explorer allows you to gather assignment data to generate reports that can show process workflows, phases, and user assignments.

The following lists the available fields for an assignment object query:

  • id
  • status: Object that extracts the following values:
    • Unassigned
    • Active
    • Suspended
    • Cleared
    • Deleted
    • Revoked
  • workflow: Object to extract the following field data:
    • description
    • id
    • name
    • phases
  • phases: Object to extract the following field data:
    • documentsPerAssignment
    • id
    • locked
    • name
    • parentId
    • parentPhaseName
    • quickCode
    • validationCriteriaName
  • lot: Object to extract the following field data:
    • id
    • name
  • name
  • user
  • assignedDate
  • clearedDate
  • createdDate
  • clear
  • total

Sample query:

query {
  cases (id: 5) {
    reviewSetup {
      workflows (id: 7) {
        phases (id: 10) {
          id
        }
      }
      assignments (id: 8) {
        id
      }
    }
  }
}

Connect API Explorer: userUpdate mutation for administration tasks

The Connect API Explorer userUpdate mutation allows administrators to perform updates to multiple user accounts simultaneously. When building this mutation, you must include the userId field to identify the user accounts.

Optional fields:

  • firstName
  • lastName
  • email
  • companyId
  • identityProviderId
  • portalCategory
  • disabled
  • requirePasswordChange: Previously named forceReset
  • licenses
  • password
  • addToActiveDirectory
  • forceResetChallengeQuestions

Important: When passing a field value that is blank, the mutation will remove the field. For example, the mutation will remove the disabled field if you enter disabled: “”. When entering new values for either the firstName or lastName, the mutation updates the entire name.

Sample mutation:

mutation {
  userUpdate(input: [
    {userId: 200, firstName: “Fred”, lastName: “Doo”},
    {userId: 1, firstName: “Velma”},
    {userId: 1, lastName: “Doo”}
  ]) {
    users {
      id
      fullName
    }
  }
}

Connect API Explorer: Clone cases using caseClone mutation

The caseClone mutation allows you to quickly create new cases without having to use the Nuix Discover UI. The following describes the mutation acceptance criteria.

Required fields:

  • caseName
  • organizationId: Used to identify an organization’s default template used for cloning.

Optional fields:

  • sourceCaseId: Data based on a user’s organization. If the sourceCaseId is missing and there is a selected default template, the mutation uses the organization’s default template case. If the sourceCaseId is missing and there is no default template selected, the application returns the following message: A sourceCaseId must be included in this mutation when an organization does not have a default template case.
  • Description
  • scheduleMetricsJob = true (default): If true, schedule is set to Monthly on day 31 at 11:00 PM.

The following lists the non-configurable fields that inherit the organization’s default or have a hard-coded default:

  • active = true (default)
  • clearData = true (default)
  • databaseServerId
  • imageRepositoryId
  • indexRepositoryId
  • fileTransferRepositoryId
  • analysisServerId
  • archiveRepositoryId
  • externalRepositoryId

The following lists examples of some of the available result fields for use in the caseClone mutation:

  • processingStatus: Object that extracts the following case processing status:
    • Failure
    • Pending
    • Queued
    • Succeeded
    • SucceededWithWarnings
  • processingType: Object that extracts the following case processing type:
    • Clone
    • Connect
    • Create
    • Decommission
    • DeleteDecommissionCase
    • Edit
    • Recommission

Note: This mutation does not support the process of setting the case metrics schedule to (daily (time)), (Weekly (week day, time)), (monthly(day, time)).

Sample mutation query with defaults:

mutation clone {
  caseClone (input: {
    organizationId: 1,
    sourceCaseId: 2,
    caseName: “My new clone”
  }) {
    case {
      id
    }
  }
}

Sample mutation query with options:

mutation clone {
  caseClone (input: {
    organizationId: 1,
    sourceCaseId: 2,
    caseName: “My new clone”,
    description: “This is my cloned case”,
    scheduleMetricsJob: true
  }) {
    case {
      id
    }
  }
}

Connect API Explorer: Remove assigned users from cases using the userCaseUnassign mutation

The Connect API Explorer userCaseUnassign mutation allows you to remove assigned users from cases for easy management of case access. This mutation allows you to remove multiple assignments simultaneously by pairing a single userId to a caseId, or multiple ids to a caseId. Only the userId field allows this many-to-one removal format. All other fields can only remove in a one-to-one format.

When assigning a user to an organization, if the user has an existing assignment to that organization, the notChangedCount field will increase by the appropriate number.

Required fields:

  • userId
  • caseId

Sample mutation:

mutation {
  userCaseUnassign(input: [
    {userId: [7,9,10,15], caseId: 120},
    {userId: 11, caseId: 121},
    {userId: 8, caseId: 120}
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

Connect API Explorer: Assign users to organizations using the userOrganizationAssign mutation

The Connect API Explorer userOrganizationAssign mutation allows you to assign users to organizations to help manage user assignments. This mutation allows you to perform multiple assignments simultaneously by pairing a single userId to an organizationId, or multiple ids to an organizationId. Only the userId field allows this many-to-one assignment format. All other fields can only assign in a one-to-one format.

When assigning a user to an organization, if the user has an existing assignment to that organization, the notChangedCount field will increase by the appropriate number.

Required fields:

  • userId
  • organizationId

Sample mutation:

mutation {
  userOrganizationAssign(input: [
    {userId: [7,9,10,15], organizationId: 4},
    {userId: 7, organizationId: 10},
    {userId: 8, organizationId: 4}
  ]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

Sample response:

{
  data: {
    userOrganizationAssign { totalCount: 6, successCount: 4, errorCount: 1, notChangedCount: 1 },
  },
  errors: [{ message: “Failed to assign the following users to organization 4: 8 }]
}

Connect API Explorer: Assign users to cases using the userCaseAssign mutation

The Connect API Explorer userCaseAssign mutation allows you to easily assign users to cases for easy management of case access. This mutation allows you to perform multiple assignments simultaneously by pairing a single userId to a caseId, or multiple ids to a caseId. Only the userId field allows this many-to-one assignment format. All other fields can only assign in a one-to-one format.

New assignments automatically set the Access Restrictions to None as the default. Currently, the mutation does not have the ability to change this setting to another option. You must modify these settings manually through the UI.

When assigning a user to a case, if the user has an existing assignment to that case, leaving the caseGroupId field blank will not change the existing caseGroupId data for that user. If a user was previously assigned to a group in a case, and that user is removed from that case, when they are re-added to the case without specifying a group, they will be placed back into the group to which they previously belonged.

When assigning a user to an organization, if the user has an existing assignment to that organization, the notChangedCount field will increase by the appropriate number.

Note: Portal administrators will not have the ability to assign a user to a case that is outside their own organization.

Required fields:

  • userId
  • caseId
  • caseUserCategory

Optional fields:

  • caseGroupId

Sample mutation:

mutation {
  userCaseAssign(input: [
    {userId: [7,9,10,15], caseId: 120, caseUserCategory: Administrator, caseGroupId: 34},
    {userId: [8], caseId: 120, caseUserCategory: GroupMember, caseGroupId: 34}
]) {
    totalCount
    successCount
    errorCount
    notChangedCount
  }
}

Connect API Explorer: Query users and their groups within cases

The Connect API Explorer allows you to query information on users and their groups within cases to help manage users and groups across review platforms. You can filter and sort the group data by name, id or userCount for NumericComparison. You can also separate the query results by page by using the standard_scroll_parameter (for example, scroll: \{start: 1, limit: 100}).

Note: To return the users of a specific group, add the user’s node under groups.

The following lists the available fields for querying user and group data:

  • groups: Object to extract the following field data:
    • id
    • name
    • userCount
    • timelineDate
    • quickCode
    • startPage
    • users

Sample query:

query cases {
  cases(id:5){
    name
    groups (id: 17 name: “group name” sort: [{ field: Name, dir: Asc }]) {
      id
      name
      userCount
      users {
        id
        name
      }
    }
  }
}

Connect API Explorer: Cross organization cloning using caseClone mutation

The mutation caseClone now allows the cloning of organizations without using the UI Extensions. The following is the acceptance criteria when using this process.

Required Fields:

  • caseName: Required data.
  • organizationId: Required data.
  • souceCaseId: Optional data with defaults based on user’s organization.
    • When not included, the mutation will use the organization’s default case template.
    • When not included and there is no default case template, the mutation uses the portal default case template.
    • When not included and there is no default case template or a portal case template, the application returns the following message: A sourceCaseId must be included in this mutation when the portal and organization do not have a default template case.
  • description: Optional data.
  • scheduleMetricsJob = true (default): Optional data. If true, schedule is set to Monthly on day 31 at 11:00 PM.
    • The mutation does not support setting the case metrics schedule as (daily (time)), (Weekly (week day, time)), (monthly(day, time)).

The following are non-configurable fields and inherit the organization defaults or have a hard-coded default:

  • active = true (default)
  • clearData = true (default)
  • databaseServerId
  • imageRepositoryId
  • indexRepositoryId
  • fileTransferRepositoryId
  • analysisServerId
  • archiveRepositoryId
  • externalRepositoryId

The following is an example of how to use these defaults and options.

Sample mutation with defaults:

mutation clone {
  caseClone(input: {
    sourceCaseId: 1,
    caseName: "My new cloned case"
  }) {
    case {
      id
    }
  }
}

Sample mutation with options:

mutation clone {
  caseClone(input: {
    organizationId: 11,
    sourceCaseId: 12,
    caseName: "My new cloned case",
    description: "This case is described",
    scheduleMetricsJob: true
  }) {
    case {
      id
    }
  }
}

SaaS / Hosted Monthly Release Notes - January 2020 (10.1.009 - 10.2.000)

Imports: Run indexing and enrichment using an import job

The Imports feature now allows you to request an indexing and enrichment job after an import job completes. On the Case Home > Manage Documents > Imports page, the Import Details page contains an option to Run indexing and enrichment, as shown in the following figure.

Import Details page

Selecting this option will run an indexing and enrichment job immediately after an import job completes. After adding a new import job, you can verify the selection of this option by clicking on the Import ID for that job and looking under the Import Details section of the Properties page, as shown in the following figure. The Run Indexing and Enrichment property indicates Yes if selected, or No if not selected.

Images and Natives Properties page

Ingestions: Add new system fields for ingestions

We have added the following three system fields to the Ingestions feature:

  • [Meta] Message Class: The message class MAPI property for email files. By default, this field is checked on the Customize Fields page in the Advanced Settings window for ingestions.
  • [Meta] PDF Properties: Extracted properties specific to PDF files. Most files will have multiple properties. Each value in this field has the name of the property followed by the value for that property. By default, this field is checked on the Customize Fields page in the Advanced Settings window for ingestions.
  • [Meta] Transport Message Headers: The message header for email files. By default, this field is unchecked on the Customize Fields page in the Advanced Settings window for ingestions.

Ingestions: NIST list updated - September 2019

Ingestions now uses an updated version of this list, released in September 2019. For more information, go to https://www.nist.gov/itl/ssd/software-quality-group/national-software-reference-library-nsrl.

Ingestions: Improvements to functionality and performance

Ingestions now uses the Nuix Workstation 8.2 processing engine. As a result, improvements to Ingestions include the following.

  • Handling of OneNote files is improved.
    • More content and attachments are extracted from OneNote data.
  • Support has been added for HEIC/HEIF file formats.
  • CAD drawing attachments are no longer treated as immaterial.
  • General improvements have been made to processing EnCase L01 files.

For a full list of features, see the Nuix Workstation 8.2 documentation.

Ingestions: Add error message information for corrupt documents

When the application encounters an ingestions error because of a corrupt document, information about that error appears in the [RT] Ingestion Detail field.

Load File Templates: Add new fields to the Variable builder for Load file templates

We have added two new expressions as options for load file template field values: Attach Count and Attach Filenames. These options are available for both general and production load file templates.

  • The Attach Count expression returns the number of immediate attachments associated with a parent document. If there are no immediate attachments, no value will be returned in the field.
  • The Attach Filenames expression lists the file names for immediate attachments associated with a parent document. The file name values are from the [Meta] File Name field. If there are no immediate attachments, no value will be returned in the field.

Processing > Jobs: Gather case metrics job captures total file size of base documents for non-document entity items

When you run a Gather case metrics job, in addition to capturing the file size of image, native, and content files associated with base documents, the application now also captures the total file size of the image, native, and content files associated with non-document entity items. This information appears in the Base documents (GB) column on the Portal Management > Reports > Hosted Details page.

Connect API Explorer: GraphQL and GraphQL Parser version upgrade

Connect API Explorer now contains the latest upgraded versions of GraphQL (v2.4.0) and GraphQL Parser (v4.1.2). These upgrades require a few minor changes to existing API queries and code that declare Date variables.

In any existing API queries, the Date variable needs to change from Date to DateTime. The following figure is an example of an existing query declaring a Date variable before the upgrade.

Connect API Explorer API page showing Date variable

This next figure shows the needed change for the upgraded version of GraphQL.

Connect API Explorer API page showing DateTime variable
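
Setting the figures aside, the change amounts to updating the variable declaration from Date to DateTime. The following is a minimal sketch; the usage field and its dateFrom argument are hypothetical and stand in for whatever field your query filters by date:

# Declare the variable as DateTime instead of Date.
# The usage field and dateFrom argument are hypothetical.
query sample($startDate: DateTime) {
  usage(dateFrom: $startDate) {
    total
  }
}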

Connect API Explorer: API token enhancements

Newly created API authorization tokens no longer require separate API keys and will never expire. On the User Administration > API Access page, the API key label now shows the following message: The API key is not required for new authorization tokens.

The API authorization changes are backward compatible to accept existing authorization tokens, which will expire after three years.

To get a new key for an existing user, on the User Administration > API Access page, clear the Authorize this user to use the Connect API check box. Then select this option again to reactivate their authorization.

Connect API Explorer: New userAdd mutation

The new mutation userAdd allows the addition of new user accounts using the API. The following lists the accepted input data for this mutation.

  • firstName: Required data.
  • lastName: Required data.
  • userName: Required data.
  • password: Required data.
  • email: Optional data.
  • licenses: Optional data. Default is Yes.
  • forceReset: Optional data. Default is Yes.
  • portalCategory: Required data. Follows the same rules as the user interface (UI) about what the user submitting the mutation can assign.
  • organizationId: Optional data. Follows the same rules as the UI about what the user submitting the mutation can assign.
  • companyId: Optional data.
  • addToActiveDirectory: Required data. Default is Yes.

The following is an example of how to use this mutation.

Sample Mutation:

mutation newuser {
  userAdd(input: {
    firstName: "new",
    lastName: "user",
    userName: "newuser",
    password: "Qwerty12345",
    email: "newuser@user.com",
    forceReset: false,
    portalCategory: PortalAdministrator,
    licenses: 1,
    addToActiveDirectory: true
  }) {
    users {
      id
      organizations {
        name
        id
        accountNumber
      }
      identityProvider
      userName
      fullName
      companyName
    }
  }
}

Connect API Explorer: New userDelete mutation

The new mutation userDelete allows the deletion of user accounts using the API, so that you can integrate your user management application with Nuix Discover. The mutation behaves as follows:

  • If all users exist, executing the userDelete mutation with single or multiple userId values deletes all specified users.
  • If some users do not exist, the mutation deletes the specified valid users and returns null id values for the users that do not exist.
  • If no users exist, the mutation returns null id values.

Fields:

  • userId: An integer that identifies the user in the portal.

The following is an example of how to use this mutation.

Sample Mutation:

mutation userDelete {
  userDelete(input: {userId: [231]}) {
    users {
      id
    }
  }
}

Connect API Explorer: Access and download API documentation

There are two new buttons available on the Connect API Explorer page, as shown in the following figure.

API Download and Open Docs buttons

The Open docs button accesses additional API documentation that contains more in-depth guidance on creating and handling queries and mutations. When you click the Open docs button, the Connect API Documentation tab appears containing the API documentation, as shown in the following figure. On the left are active links that access individual topics. Clicking these links will scroll the page up or down to the selected topic.

API Documentation

Note: The top-right corner of the Connect API Documentation tab shows the URL location of the documentation and the current version of the document.

To download the documentation, click Download docs. This downloads the documentation as a Hypertext Markup Language (HTML) page for viewing in any browser window.

Import API: Run indexing and enrichment using createImportJob mutation

The createImportJob mutation now contains a parameter for running an indexing and enrichment job after an import job completes.

  • Name: runIndexing
  • Type: Boolean
  • Required: No
  • Default: false

The following is an example of how to use this parameter.

Sample mutation:

mutation {
  createImportJob (
    caseId:26,
    options:{
      name:"My Import Job",
      description:"Import job description",
      level:"Imports/Custodian A/0001",
      docsPerLevel:1000,
      updateGroupCoding:true,
      runIndexing:true
    }
  )
  {
    rdxJobId
  }
}

Note: If this parameter is set to true, an indexing and enrichment process will run after the import job.

Import API: Run deduplication in import job

The createImportJob mutation now allows the option to suppress documents from the import job as duplicates. When the runDeduplication parameter is set to true, the job will use the deduplication settings associated with Ingestions processing as follows:

  • Use the default setting for Case or Custodian. If there is no default setting, use Case.
  • Use the default setting for Only use the top parent documents to identify duplicates. If there is no default setting, use False.
  • Do not retain suppressed files regardless of the setting.

The following are some additional considerations that will take place during processing:

  • The Imports feature codes all imported documents with a value of Yes in the Exclude from Ingestions Deduplication field. However, this field is not coded when deduplication is selected and the setting is Case or Custodian.
  • The files within suppressed documents will not transfer.
  • If suppressing a document that contains an existing document ID in main_suppressed, the application returns the following message: Document <doc ID> was identified as a duplicate to be suppressed, but it was not suppressed because a document with the same Document ID has already been suppressed in this case.

In the createImportJob mutation, add the following parameter under options:

  • Name: runDeduplication
  • Type: Boolean
  • Required: No
  • Default: false

Note: Set runDeduplication to true to run deduplication on the documents within this import and to suppress duplicates. This process uses the deduplication settings for Ingestions.

The following is an example of how to use these parameters.

Sample mutation:

mutation {
  createImportJob (
    caseId:26,
    options:
    {
      level:"Imports",
      docsPerLevel:1000,
      updateGroupCoding:true,
      runDeduplication:true
    }
  )
  {
    rdxJobId
  }
}

On the Properties page for an import job, found on the Case Home > Manage Documents > Imports page, there is a new row under Statistics that reports on the number of suppressed documents, as shown in the following figure. This new row will only appear when using the deduplication option. If no duplicates are found, the value will appear as zero.

Import Job Statistics data

Import API: Assign sequential document IDs in an import job

The createImportJob mutation now contains parameters for assigning sequential document ID values for documents in the job.

  • Name: documentIdFormat
  • Valid values: Sequential or Existing
  • Required: No
  • Default: Existing

Note: Use a value of Sequential to have the application reassign document ID values for the documents within this import. Assignment of document IDs uses the provided prefix beginning with the next available document ID number matching that prefix and incrementing by 1 for each document.

  • Name: documentIdPrefix
  • Type: String
  • Required: No

Note: This is static text that appears at the beginning of each document ID only when using Sequential for the documentIdFormat option. If you do not provide this option, the application will use the document ID prefix setting from the Ingestions default settings.

When the documentIdFormat option is Sequential, the job generates a new document ID for all documents within the job. The generated ID will consist of a prefix from documentIdPrefix and a number value padded to nine digits beginning with the next available number in the case with the same prefix.

Document source and attachment relationships are generated using the parentId references, based on the provided document ID values. If using sequential renumbering, source and attachment relationships are generated only from the parentId references within this job. Documents will not attach to prior existing documents.

If the document contains only one page, the page label will match the document ID. For documents containing multiple pages, the page labels update as DocID-00001, DocID-00002, DocID-00003, consecutively to the last page.

For files that are in pages, the page file name will match the existing page label, such as DocID-00001.tif, DocID-00002.tif, and so on. For files not in pages, the file is named after the document ID, for example, DocID.xls.

The following is an example of how to use these parameters.

Sample mutation:

mutation {
  createImportJob (
    caseId:26,
    options:
    {
      level:"Imports",
      docsPerLevel:1000,
      updateGroupCoding:true,
      documentIdFormat:Sequential,
      documentIdPrefix:"Doc_"
    }
  )
  {
    rdxJobId
  }
}
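
Given the sample above, and assuming the highest existing document ID in the case with the Doc_ prefix is Doc_000000100 (a hypothetical starting point), the job would assign IDs and page labels as follows:

Doc_000000101            First imported document
Doc_000000101-00001      Page label for page 1
Doc_000000101-00002      Page label for page 2
Doc_000000102            Next imported document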

Import API: Transfer files from S3 in createImportJob mutation

The createImportJob mutation now contains parameters to transfer files from S3.

  • Name: fileTransferLocation
  • Valid values: AmazonS3 or Windows
  • Required: No
  • Default: Windows

Note: The default is Windows. When selecting Windows, the files copy from the file repository designated for Images under the import\<case name> folder. When selecting AmazonS3, this mutation returns information needed to access the S3 bucket.

These options allow you to request the following S3 return values within the fileTransferLocationInfo parameter:

  • accessKey
  • secretAccessKey
  • token
  • repositoryType
  • regionEndpoint
  • bucketName
  • rootPrefix
  • expiration

Note: When the fileTransferLocation is AmazonS3, the mutation copies the files from the Amazon S3 bucket and folder created for the job rather than from the import folder on the agent.

The following is an example of how to use these parameters.

Sample mutation:

mutation {
  createImportJob (
    caseId:26,
    options:
    {
      level:"Imports",
      docsPerLevel:1000,
      updateGroupCoding:true,
      fileTransferLocation:AmazonS3
    }
  )
  {
    rdxJobId
    fileTransferLocationInfo
    {
        accessKey
        secretAccessKey
        token
        repositoryType
        regionEndpoint
        bucketName
        rootPrefix
        expiration
    }    
  }
}

Sample returned data:

{
  "data": {
    "createImportJob": {
      "rdxJobId": 1040,
      "temporaryFileTransferLocationConnectInfo": {
        "accessKey": "AEK_AccessKeyId",
        "secretAccessKey": "AEK_SecretAccessKey",
        "token": "AEK_SessionToken",
        "repositoryType": "AmazonS3",
        "regionEndpoint": "AEK_Region",
        "bucketName": "AEK_Bucket",
        "rootPrefix": "AEK_JobPrefix",
        "expiration": "2019-11-27T07:04:29.601994Z"
      }
    }
  }
}

Import API: New importJobS3Refresh mutation to refresh S3 credentials

The new importJobS3Refresh mutation allows you to refresh credentials for an S3 folder created as part of an import job. These credentials expire after 12 hours; however, file transfers may continue past this time frame.

The importJobS3Refresh mutation passes the caseId and rdxJobId, which allow the application to look up the folder information. As an additional security measure, the mutation also passes the original accessKey and secretAccessKey, which must match the originally provided keys.

The following describes the mutation and parameters:

  • importJobS3Refresh: Obtains new file transfer location information for an existing import job.
  • accessKey (parameter): Uses the accessKey value previously returned for this import job.
  • secretAccessKey (parameter): Uses the secretAccessKey value previously returned for this import job.

If there is no S3 information for the provided job ID, the application returns the following error: There is no information available for this rdxJobId. If the accessKey or secretAccessKey does not match, the application returns the following error: The keys provided do not match the keys for this rdxJobId.

The following is an example of how to use these parameters and the possible returned data.

Sample mutation:

mutation {
  importJobS3Refresh (
    caseId:26,
    rdxJobId:324,
    accessKey:"AEK_AccessKeyId_Old",
    secretAccessKey:"AEK_SecretAccessKey_Old"
  )
  {
    rdxJobId
    fileTransferLocationInfo
    {
        accessKey
        secretAccessKey
        token
        repositoryType
        regionEndpoint
        bucketName
        rootPrefix
        expiration
    }    
  }
}

Sample returned data:

{
  "data": {
    "importJobS3Refresh": {
      "rdxJobId": 1040,
      "fileTransferLocationInfo": {
        "accessKey": "AEK_AccessKeyId",
        "secretAccessKey": "AEK_SecretAccessKey",
        "token": "AEK_SessionToken",
        "repositoryType": "AmazonS3",
        "regionEndpoint": "AEK_Region",
        "bucketName": "AEK_Bucket",
        "rootPrefix": "AEK_JobPrefix",
        "expiration": "2019-11-27T07:04:29.601994Z"
      }
    }
  }
}

Import API: Modifications to parameter requirements in FieldParams

The type and onetomany field parameters in FieldParams are no longer required. The behavior is as follows:

  • When not providing the type field parameter, the application will match on the field name only.
    • If no match is found, the application records the following error: The value for field <field name> for document <Document ID> was not imported. No such field exists, and no field type was provided to create a new field.
    • If a match is found on multiple existing fields, data will not import, and the application records the following error: The value for field <field name> for document <Document ID> was not imported. Multiple fields exist with the name provided, and no field type was provided.
  • When not providing the onetomany field parameter, if no match is found on the field name, the application creates a new field as one-to-many.
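
For example, a fields entry in the addDocumentsForImportJob mutation can now omit the type and onetomany parameters, in which case the application matches on the field name only. The following is a minimal sketch; the field name and value are illustrative:

fields:[
  {name:"Custodian",action:InsertUpdate,values:"Custodian A"}
]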

SaaS / Hosted Monthly Release Notes - December 2019 (10.1.005 - 10.1.008)

Analysis > Predictive Coding > Add custom Predictive Coding Templates

The Predictive Coding Templates page has been added to the Analysis capabilities in Nuix Discover and is available to all administrators. This page allows administrators to select the Standard or Standard + people template when setting up predictive coding or Continuous Active Learning (CAL) models, or to create their own templates.

Note: The Standard and Standard + people templates are available to all cases and cannot be modified.

Create a new Predictive Coding Template

To create a new template, go to the Case Home > Analysis > Predictive Coding Templates page and click Add. Add a name and description for the template, and then click Save. The Fields page opens for that template. To add fields to the template, select a field in the Add field list and click the + (plus sign) button.

Predictive Coding Templates Fields page Field selection

The following information applies to fields in a predictive coding template.

  • The values of date fields included in a template appear as text strings.
  • The weight for each field is 1 by default, but you can change the value to anything between 1 and 10. Weight reflects the amount of influence a field has on the model in relation to other fields in the template. For example, if you want People information to be more heavily considered in the model than other fields, adjust the weight value on the People fields to be higher than the other field weight values.
Predictive Coding Templates Fields page showing added field

The following information applies to all custom predictive coding templates.

  • Extracted text from documents is included in every template, although it is not listed as an item in the template. The training field for the model that the template is selected for is also included.
  • Once a template is being used by a CAL or predictive coding model, it cannot be edited. Open the template’s Properties page to view the names of the models that are using the template.
Predictive Coding Templates Properties page

Clone a Predictive Coding Template

All custom templates can be cloned, regardless of whether they are in use. To clone a template, open the Fields page for the template and click Clone template. Update the template name as needed and click Save. The Fields page for the new template opens. Add fields, delete fields, or change any of the field weights on that page.

Delete a Predictive Coding Template

You can delete any custom predictive coding template that is not in use by a predictive coding or CAL model. To delete a template, open the Fields page for the template and click Delete template.

Use Predictive Coding Templates with CAL

Administrators now have the option to select a predictive coding template when configuring training for a model. To select a template, go to the Case Home > Analysis > Populations and Samples page and select a population. Then, open the Predictive Coding page for the population and click Configure training. On the Settings page, select a template in the Predictive coding template list.

Configure training Settings page

Note: You can change the predictive model template throughout the lifecycle of the training model. However, at the present time, the application only provides data about the current template selected for training and does not record the history of different templates that have been selected.

Use Predictive Coding Templates with the Predictive Coding standard workflow

To select a predictive coding template to use when adding a predictive model, go to the Case Home > Analysis > Predictive Models page and click Add. In the Add Predictive Model dialog box, select a predictive coding template in the Predictive coding template list.

Add Predictive Model page

Portal Management > Processing > Jobs: Size of Elasticsearch index captured during Gather case metrics job

If a case uses an Elasticsearch index, the Gather case metrics job now captures the size of the Elasticsearch index. The Elasticsearch index is used to capture the coding audit history.

Portal Management > Reports: Elasticsearch index size available in the Hosted Details report

If a case uses an Elasticsearch index, you can view the size of the Elasticsearch index for a case on the Reports > Hosted Details page. The name of the new column is Elasticsearch index (GB). The Elasticsearch index is used to capture the coding audit history.

Connect API: New case statistic in the API {cases{statistics}} query

The Nuix Discover Connect API contains a new sizeOfElasticSearchIndex field that returns the total size of the Elasticsearch index for cases. The Elasticsearch index stores the audit history records for coding changes that are viewable within the Coding History pane.

The following example uses the new sizeOfElasticSearchIndex field in the cases {statistics} object.

{
  cases {
    name
    statistics {
      sizeOfElasticSearchIndex
    }
  }
}

The sizeOfElasticSearchIndex field is also part of the aggregateTotalHostedSize statistic, which returns the sum of sizeofBaseDocumentsHostedDetails, sizeofRenditionsHostedDetails, aggregateDatabases, sizeOfElasticSearchIndex, dtIndexSize, sizeOfNonDocumentData, and sizeOfOrphanFiles.
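
The following is a minimal sketch that queries the aggregate statistic, assuming aggregateTotalHostedSize is exposed in the same statistics object:

{
  cases {
    name
    statistics {
      aggregateTotalHostedSize
    }
  }
}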

SaaS / Hosted Monthly Release Notes - November 2019 (10.1.001 - 10.1.004)

Portal Management > Reports: Change the time zone

You can now change the time zone for the data that appears on the Portal Management > Reports > Usage and Hosted Details pages from local time to Coordinated Universal Time (UTC). Using UTC time allows the reports to display data consistently with reports that are generated through the API when querying for specific dates or date ranges. By default, the data appears in local time.

Use the following procedure to change the time zone from local time to UTC.

  1. On the Portal Management > Reports > Usage or Hosted Details page, on the toolbar, click the Time zone button.
  2. In the Time zone dialog box, shown in the following figure, select UTC time.

  Time Zone dialog box

  3. Click OK.

The data displayed is then based on UTC time.

Portal Management > Reports: Subtotal column added to Hosted Details report

The Portal Management > Reports > Hosted Details page now includes a Subtotal (GB) column.

Note: The label for the Total size (GB) column changed to Total (GB).

In the Subtotal (GB) column, you can view a subtotal of the active data, which includes the data in the following columns:

  • Base documents (GB)
  • Production renditions (GB)
  • Databases (GB)
  • Content index (GB)
  • Predict (GB)
  • Orphan (GB)

Portal Management > Settings > Log Options: Download a telemetry log file

The Portal Management > Settings > Log Options page includes a new button on the toolbar named Download log that you can use to download a telemetry log file. The application downloads the telemetry log data to a .log text file.

To keep the file size manageable, you can configure the number of records to maintain in the JSON string in the Telemetry archive configuration setting on the Portal Management > Settings > Log Options page. For example, as shown in the following figure, NRecentRecordsToReturn is set to 10000.

Telemetry archive configuration setting
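
As a minimal sketch, assuming NRecentRecordsToReturn is the only key you need to set, the JSON string might look like the following:

{
  "NRecentRecordsToReturn": 10000
}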

SaaS / Hosted Monthly Release Notes - October 2019 (10.0.009 - 10.1.000)

Audio: Resubmit multiple previously transcribed documents

You can now resubmit audio documents to generate new transcriptions using the Transcribe audio option on the Tools menu. Doing so can be useful if you selected the wrong language model when you transcribed audio documents, or if errors occurred during the transcription job.

Before you resubmit previously transcribed documents, note the following:

  • After you resubmit the audio documents, the application removes any corrections that were made in the previous transcriptions.
  • You cannot resubmit documents that have annotations. Delete the annotations first.

Use the following procedure to resubmit previously transcribed audio documents.

  1. On the Tools menu, select Transcribe audio.
  2. In the Transcribe audio dialog box, shown in the following figure, do the following:

  Transcribe audio confirmation message

    • Under Language model, select the language. You can select one of the following audio language models:
      • Arabic (Modern Standard)
      • Brazilian Portuguese
      • Chinese (Mandarin)
      • English (UK)
      • English (US)
      • French
      • German
      • Japanese
      • Korean
      • Spanish
    • Under Optional inclusions, select the check boxes for the documents that you would like to resubmit.
  3. Click OK.

Tools > OCR processing: Languages listed in alphabetical order in the OCR processing dialog box

In the OCR processing dialog box, available languages for OCR processing now appear in alphabetical order.

Ingestions: Show level settings in Add ingestion dialog box

In the Add ingestion dialog box, a read-only display of the default level settings for the case now appears under the Family deduplication setting.

For example, select the default settings for levels, as shown in the following figure.

Default settings Levels page

These levels appear in the Add ingestion dialog box under the Levels heading, as shown in the following figure.

Add ingestion dialog box

Exports: Updates to the MDB Classic export type

Two updates have been made to the MDB Classic export type in the Export window.

  • Administrators can export a production or a set of rendition documents. In previous releases, administrators could export only binders or base documents with this export type.
    • When creating an export from the Manage Documents page, administrators can select the MDB Classic export type.
    • When selecting rendition documents from search results for export using the Tools > Export menu option, administrators can select the MDB Classic export type from the Export type list.
  • Administrators can choose to populate the pages table of an MDB export file even if no files are selected for export.
    • If an administrator selects the option to export an MDB load file in the Export window but does not select any files to export, the pages table of the exported MDB file will be empty by default. However, administrators can now populate the pages table of the MDB file anyway. On the Load files page, in the Settings window (available when you click the Settings button, or gear), select the Populate the pages table of the MDB even if no files are selected for export check box.
    Export Renditions Load files page Settings options

SaaS / Hosted Monthly Release Notes - September 2019 (10.0.005 - 10.0.008)

Audio pane: Select a language model to use for transcription

You can now specify the language model to use for transcription. For example, if you know that the audio in a file uses British English instead of American English, you can select English (UK) as the source language before you transcribe the audio file.

To specify the language model for an individual file, select a file, and then click the Transcribe audio button in the Audio pane. In the Transcribe audio dialog box, select an option from the Language Model list, and then click OK.

Transcribe audio dialog box Language selection

To specify the language model for multiple files, select the files. On the Tools menu, select Transcribe audio. In the Transcribe audio dialog box, select an option from the Language Model list, and then click OK.

Transcribe audio dialog box Language model selection

You can select one of the following audio language models:

  • Arabic (Modern Standard)
  • Brazilian Portuguese
  • Chinese (Mandarin)
  • English (UK)
  • English (US)
  • French
  • German
  • Japanese
  • Korean
  • Spanish

Audio pane: Resubmit transcribed audio file

If you accidentally selected the wrong language model when you transcribed an audio file, you can click the Transcribe audio button in the Audio pane to resubmit the transcription using a different language model, as shown in the following figure.

Note: This functionality is not yet available for multiple files using the Tools > Transcribe audio option.

Transcribe audio dialog box confirmation message

Note: You cannot re-transcribe a file that has annotations. Delete the annotations first.

Coding History: Case administrators can see all records regardless of group membership and security

Case administrators can see all history records, including records for deleted objects, in the template views in the Coding History pane, regardless of their group membership and the group security settings for objects such as binders, fields, or productions.

Case Setup > System Fields: New system field for Audio Language Model

A new system field named Audio Language Model is available on the Case Setup > System Fields page.

Note: The application disables this field for groups by default, and you cannot grant groups write access to this field.

The application populates this field after a user submits an audio transcription from the Audio pane or from the Tools > Transcribe audio menu. The field value is the name of the language selected in the Language Model list for the audio transcription.

Audio Language model Items page

Manage Documents > Exports: Enhancements and changes to the Exports feature

You will now get the same export results regardless of the way that you choose to submit the export job. You can submit export jobs on the Manage Documents > Exports page or by using the Tools > Export feature on the Documents page.

Only administrators can export documents from the Manage Documents > Exports page. In addition, the user interface used in the Tools > Export feature is now also used on the Manage Documents > Exports page and includes the same options for administrators.

Major enhancements and changes

  • When exporting on the Manage Documents > Exports page, you can now export more than one load file at a time.
  • For base documents, you can select options to convert image files to PDF or TIFF.
  • For any load file field references to files, for page or document load files, the application now populates load file fields based on the files exported along with load files. This differs from how the Manage Documents > Exports feature previously worked for page load files. For example, in the legacy code, if you exported an MDB load file on its own, with no other files, the pages table would reflect main_pages for the documents in the export. In the updated code, if you export an MDB with no files, no updates occur to the pages table.

Other enhancements and changes

  • Exported files will exist in a folder named according to the export name and ID under the export folder. However, you can select a repository from the File repository list and, under Output folder path, you can also export to an existing folder instead. To select a file repository or an existing folder, on the Define export page, click the Settings (gear) button to open the Settings window, as shown in the following figure.

  Export page Settings Options

  • When exporting using the Manage Documents > Exports feature, on the Source page, you can choose to export a Binder of documents or a Production. Depending on whether you select Binder or Production on the Source page, the options on subsequent pages will differ. This is similar to how the options change in the Tools > Export window depending on whether you select base or rendition documents.

    Note: This page is not enabled when using the Tools > Export feature on the Documents page because that export is based on documents selected in a search result.

    Export window Source Options
  • A new Image settings page replaces the PDF settings page.
    • For image files, users can select the option to convert images to non-searchable PDFs or to convert PDFs to TIFF. These options were previously available for production exports from the Manage Documents > Exports page and are now options for base document exports as well. If the document set already consists of PDFs, you can select the following option from the Image format list: Embed OCR text in existing PDFs. Selecting this option will not create searchable PDFs from non-PDF files.
    • Note: The Embed OCR text in existing PDFs option is available only if the Enable PDF annotations option is set for the case.

      Export window Image Settings options
  • A new Export type named MDB Classic is now available to administrators on the Define export page, as shown in the following figure.
  Export window Define Export options
    • The MDB Classic export type makes file selection and MDB page table updates more consistent with results. This is similar to using IEM in the past.
    • On the File types page, instead of selecting the options to export endorsable images, native, and content (.txt) files, you can now choose to export Imaged pages or Content files, as shown in the following figure. If you select Imaged pages, the application exports all of the files that you can see in the Image viewer in the View pane. If you select Content files, the application exports all of the files that you can see in the Native viewer in the View pane.
    Export window Select file types options
    • Just like for the Custom export type, on the Annotations page in the Export window, users can choose to endorse footers and annotations.
    • The options for omitting other files when a file is annotated are slightly different than the omit file options for Custom export types. For the MDB Classic export type, the default options are as follows:
      • Omit other page files if document images are annotated: When this option is selected, only the annotated files are exported. The application will exclude any other page files from the export.
      • Omit content files if document is annotated: When this option is selected, the application excludes all content files from the export.
      Export window Apply annotations options
    • For the MDB Classic export type, you can select only MDB load files for export with the files. By default, if exporting files, the pages table of the MDB will mirror the main_pages table in the application, that is, what is seen in the Image viewer in the View pane.
    • A new option, shown in the following figure, is available for the MDB Classic export type. If needed, click the Settings (gear) button to select the following option:
      • Associate all exported files for a document in the pages table. If you select this option, all files exported will be represented in the pages table of the MDB, even if they did not exist in the main_pages table.
      Export window Include load file options

Export Feature Summary

  • The Export feature on the Manage Documents page is available only to administrators, and it is always available to them.
    • The export set is based on a selected Binder or Production.
    • No group security is enabled for the items listed for selection. All Binders, Productions, Fields, and Annotations are listed as options.
  • The Export feature, which is an option available on the Tools menu on the Documents page, is available only if the user’s group is set to Allow on the Security > Features page for the Processing – Exports feature. The following additional information applies:
    • The export set is based on selected documents in search results.
    • Group security is enabled for the items listed for selection. Users will see fields or annotations that are allowed only for the group they are logged in as.
    • Non-administrators have access to only one export type, which is Native files only, no load file included.

The following list provides an overview of the use case, security, available file type options, and handling of base documents and renditions, as well as an overview of the updates to the MDB pages table for the three export types.

  • Export Type > Use case
    • Custom: Select this option if you want all available file options.
    • Native files only, no load file included: Select this option if you only want to export native files for a set of documents and nothing else.
    • MDB Classic: Select this option if you are loading the export to another Nuix Discover case and want the file organization or views to be the same in the target case.
  • Export Type > Security
    • Custom: Administrators only
    • Native files only, no load file included:
      • Available to administrators
      • Available to non-administrators who have access to the export feature
    • MDB Classic: Administrators only
  • Export Type > File type options available
    • Custom:
      • Endorsable image files: Any files in the Image viewer that are .tif, .tiff, .jpeg, .jpg, .bmp, .png, or .pdf (if PDF annotations are enabled in the case)
      • Native files: Highest-ranking non-txt file or file with an extension matching the field value (if specified)
      • Content files (.txt): Existing .txt file on fileshare or extracted text (for base documents)
    • Native files only, no load file included:
      • No selection available
      • The application will export only one native file per document
      • The native is the highest-ranking non-txt file or file with an extension matching the field value (if specified in case options)
    • MDB Classic:
      • Imaged pages: Any files in the Image viewer
      • Content files: Any files in the Native viewer
  • Export Type > Other options: Base documents
    • Custom:
      • Image format: Select to embed OCR text in existing PDFs, convert images to PDF, or convert PDFs to TIFF
      • Footers
      • Annotations
      • Load file: One MDB or any number of non-MDB load files
    • Native files only, no load file included:
      • Exported file structure:
        • As currently foldered in the case
        • Flattened
    • MDB Classic:
      • Image format: Select to embed OCR text in existing PDFs
      • Footers
      • Annotations
      • Load file: One MDB
  • Export Type > Other options: Rendition documents
    • Custom:
      • Image format: Select to embed text in existing PDFs, convert images to PDF, or convert PDFs to TIFF
      • Load file: One MDB or any number of non-MDB load files
    • Native files only, no load file included:
      • Exported file structure:
        • As foldered in the case
        • Flattened
    • MDB Classic: Not available for production renditions
  • Export Type > MDB pages table
    • Custom:
      • At least one file per document will be associated with a document in the pages table (as long as it was selected for export).
      • If endorsable images are exported, those will be associated with the document in the pages table.
      • If only a native file is exported for a document, it will be associated with the document in the pages table.
      • If only a content file is exported, the .txt file will be associated with the document in the pages table.
      • If you select the option to Update the pages table to mirror files in the image viewer, and if you select both endorsable images and natives for export, and both of those file types exist in the Image viewer for a document, then those files will all be associated with the document in the pages table.
      • If you do not select any files for export, the pages table will be empty.
    • Native files only, no load file included: Not applicable
    • MDB Classic:
      • The pages table will mirror files available in the Image viewer if you select Imaged pages to be exported.
      • If you do not select Imaged pages to be exported, no files will be referenced in the pages table.
      • If you select Content files to be exported as well as the option to Associate all exported files for a document in the pages table, then the content files exported will be referenced in the pages table.

Additional basic information about how exports work

  • The application copies the exported files to the case default file transfer file repository and a unique subfolder under the export folder. Administrators can change the file repository and select an existing subfolder to copy the files to.
    • The application names the subfolder under the export folder based on the export name and the export ID. The application names the load files according to the export name only, and not the export ID.
  • Exported file structure:
    • When exporting files with an MDB load file, files are exported in the same file structure as they exist in the case.
    • When exporting files with a non-MDB load file, files are separated into images, native, and text folders. However, if exporting a production from the Manage Documents page, the application respects the export path details in the production settings. Note that any system load file templates reference the default folder names of image, native, and text files.
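
As an illustrative sketch (the folder naming format is an assumption based on the description above), an export named My Export with export ID 45 that includes a non-MDB load file might produce the following structure, with the load file named after the export name only:

export\
  My Export_45\
    image\
    native\
    text\
    My Export.dat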

Manage Documents > Ingestions: Improved handling of missing files in the ingest_temp folder during file transfer

In previous versions, the application could not complete the transfer of files during the ingestions process if any files were missing from the ingest_temp folder. This would often occur when files were quarantined by virus scanning software. In those instances, the application could not complete the ingestions job without manual intervention. With this release, if files cannot be copied because they do not exist in the ingest_temp location, the application does the following:

  • Creates a slipsheet for any missing file with the text “File not available to copy.” Copies the slipsheet to the proper location in the images folder and references the slipsheet in the main_pages table.
  • Codes the document with a value of “File Copy Failed” in the [Meta] Processing Exceptions field.
  • Codes the document with a value of “File not available in temporary folder” in the [RT] Ingestions Exception Detail field.
  • Updates the [Meta] File Extension - Loaded field with a value of “pdf.”
  • Codes the [Meta] File Extension - Original field with the extension of the original file.

Manage Documents > Ingestions: Support up to 10 levels

Administrators can now select up to ten levels on the Levels page in the Default settings window for Ingestions.

For each level, you can select one of the following options:

  • Constant: Enter a static value into the box.
  • Select a field: A list appears that allows you to select a field. You can select any one-to-one field that is selected on the Customize Fields page in the Advanced settings window.
  • None.
  • Existing levels: Select a level that already exists for the case.

Manage Documents > Load File Templates: Field name suffixes removed in the Variable Builder

In the Variable Builder for load file templates, the names of the field types (DATE, MEMO, NUMB, PICK, TEXT, YES/NO) no longer appear in the Name column.

Variable Builder Quick Picks tab

Portal Management > Settings: Enable telemetry logging from the portal database

You can write telemetry logging data to the portal database. This logging data includes all usage metrics and application errors for a portal.

The following settings are available on the Portal Management > Settings > Log Options page, as shown in the following figure.

Portal Home Settings page

  • Enable telemetry logging: Select this check box to enable logging for the portal.
  • Log detail level: Select an option to adjust the level of detail captured in the log: Error, Info, Debug, or Trace.
  • Log file location: If you provide a location, the telemetry data is stored in physical files on the web servers.
  • Max log files: Provide a value to indicate the number of archive (.archive.log) files to keep on the web servers.
  • Store logs in database: Select this check box to store log data in the portal database. If selected, an RPF job pushes the data to S3 and cleans up the database table per the configuration indicated in the Telemetry archive configuration setting.

    Note: If this option is selected, and the Telemetry archive configuration setting is not configured, then no log entries will be deleted from the database table.

  • Telemetry archive configuration: The information in this setting controls how frequently the RPF job runs to upload log entries from the database table to S3 and clean up the portal database. This setting is a JSON string with the following fields:

    {
      "Checkpoint": "0",
      "Key": "AWS key",
      "Secret": "AWS secret key",
      "Region": "AWS region",
      "Bucket": "AWS S3 bucket name",
      "CleanupMaxDays": 30,
      "ScheduleId": null,
      "IntervalInMinutes": 60
    }
    • Checkpoint: Defaults to 0. This holds the value of the last successful upload to S3.
    • Key: AWS key
    • Secret: AWS secret key
    • Region: AWS region
    • Bucket: AWS S3 bucket
    • CleanupMaxDays: Cleans up database records that are older than this value.
    • ScheduleId: Defaults to null. This will be set by the RPF job and should not be modified manually.
    • IntervalInMinutes: Defaults to 60. This sets the frequency, in minutes, for the RPF scheduled job.
  • The Log Options page also includes the following additional changes:
    • The following two syslog options were removed:
      • Ringtail syslog server name
      • Ringtail syslog server port
    • Some options were renamed as follows:
      • Log enabled > Enable telemetry logging
      • Log level > Log detail level
      • Log location > Log file location
      • Max Archive Files > Max log files
      • Database log enabled > Store logs in database

Portal Management > Cases and Servers: Assign logical database names to cloned cases

In previous releases, when cloning a case, the application created database files using the name of the source case that was cloned, rather than the name of the new case. The database file names are now based on the name of the cloned case, not the original case.

Import API

There are three new mutations in the Nuix Discover Connect API for importing documents into a case: createImportJob, addDocumentsForImportJob, and submitImportJob.

Create an import job

You can create an import job in a case using the createImportJob mutation. This mutation returns the rdxJobID, which is used in the next mutation to add documents to the import job. This mutation also allows you to configure some job-level settings.

Sample mutation:

mutation {
  createImportJob (
    caseId:26,
    options:{
      name:"My Import Job",
      description:"Import job description",
      level:"Imports/Custodian A/0001",
      docsPerLevel:1000,
      updateGroupCoding:true
    }
  )
  {
  rdxJobId
  }
}

Sample response:

{
  "data": {
    "createImportJob": {
      "rdxJobId": 319
    }
  }
}

Configurable options:

  • name: String is the name of the import job. If you do not provide a value for this option, the job name is “Import from API.”
  • description: String is the description of the import job. If you do not provide a value for this option, the job description is “Import from API.”
  • level: String determines the root level to put documents in. If you do not provide a value for this option, the level is “API/{ImportID}/0001.” Level values assigned to documents in the addDocumentsForImportJob mutation override this setting.
  • docsPerLevel: Int determines the maximum number of documents per level. If you do not provide a value for this option, the value is 1000.
  • updateGroupCoding: Boolean updates the group coding fields (All Custodians, for example) for new documents in this import and any existing or future family duplicate documents. If you do not provide a value for this option, the value is “false.”

Add documents to an import job

You can use the addDocumentsForImportJob mutation to add documents to an import job that was created using the createImportJob mutation. Each addDocumentsForImportJob mutation allows you to add up to 5000 documents. To add additional documents to the job, run multiple mutations with different documents.

Note: When defining the path value for pages and content files, the path is relative to the “import” folder in the Image file repository defined for the case.

For example, if the path is defined as follows:
path:"Imports\\Media0001\\Images\\0001\\DOC-00000001.tif"
then the file should be located at:
{Image file repository}\import\{case name}\Imports\Media0001\Images\0001\DOC-00000001.tif.

Sample mutation:

mutation {
  addDocumentsForImportJob (
    caseId:26,
    rdxJobId:319,
    documents:[
      {
        documentId:"DOC-00000001",
        hash:"qwer1234asdf5678zxcv1234qwer5678",
        familyhash:"poui1234asdf5678zxcv1234qwer5678",
        level:"Imports/Custom/0001",
        parentId:"",
        sourceattachmentaction:Delete,
        pageaction:InsertUpdate,
        mainfields:[
          {name:DocumentDate,value:"2019-01-03",action:Update},
          {name:DocumentType,value:"Microsoft Outlook Message",action:Update},
          {name:DocumentTitle,value:"Re: Your message",action:Update},
          {name:DocumentDescription,value:"",action:Delete},
          {name:EstimatedDate,value:"False",action:Update}
        ],
        fields:[
          {name:"Custodian",onetomany:false,type:PickList,action:InsertUpdate,values:"Custodian A"},
          {name:"[Meta] Processing Exceptions",type:PickList,action:InsertUpdate,values:["Corrupted","Empty File"]},
          {name:"[Meta] File Name",onetomany:false,type:Text,action:InsertUpdate,values:"Re: Your message.msg"},
          {name:"[Meta] File Path",onetomany:false,type:Memo,action:InsertUpdate,values:"C:\\Downloads\\Email"},
          {name:"[Meta] File Size",onetomany:false,type:Number,action:Delete,values:"1592"},
          {name:"[Meta] Date Sent",onetomany:false,type:DateTime,action:InsertUpdate,values:"2019-01-03"}
        ],
        correspondence:[
          {type:"From",people:"acustodian@example.com",orgs:"example.com",action:InsertUpdate},
          {type:"To",people:"bsmith@example.com",action:Append},
          {type:"CC",people:["kjohnson@example.com","ewilliams@example.com"],action:InsertUpdate}
        ],
        pages:[
          {pagenumber:1,pagelabel:"DOC-00000001",path:"Imports\\Media0001\\Images\\0001\\DOC-00000001.tif"},
          {pagenumber:2,pagelabel:"DOC-00000002",path:"Imports\\Media0001\\Images\\0001\\DOC-00000002.tif"}
        ],
        contentfiles:[
          {path:"Imports\\Media0001\\Natives\\0001\\DOC-00000001.mht"}
        ]
      },
      {
        documentId:"DOC-00000003",
        hash:"6425hyjkasdf5678zxcv1234qwer5678",
        familyhash:"poui1234asdf5678zxcv1234qwer5678",
        level:"Imports/Custom/0001",
        parentId:"DOC-00000001",
        sourceattachmentaction:InsertUpdate,
        pageaction:InsertUpdate,
        mainfields:[
          {name:DocumentDate,value:"2019-01-02",action:Update},
          {name:DocumentType,value:"Microsoft Word",action:Update},
          {name:DocumentTitle,value:"WordDoc.docx",action:Update},
          {name:DocumentDescription,value:"Sample description",action:Update},
          {name:EstimatedDate,value:"False",action:Update}
        ],
        fields:[
          {name:"Custodian",onetomany:false,type:PickList,action:InsertUpdate,values:"Custodian A"},
          {name:"[Meta] File Name",onetomany:false,type:Text,action:InsertUpdate,values:"WordDoc.docx"},
          {name:"[Meta] File Path",onetomany:false,type:Memo,action:InsertUpdate,values:"C:\\Downloads\\Email\\Re: Your message.msg"},
          {name:"[Meta] File Size",onetomany:false,type:Number,action:InsertUpdate,values:"74326"},
          {name:"[Meta] Date Modified",onetomany:false,type:DateTime,action:InsertUpdate,values:"2019-01-02"}
        ],
        pages:[
          {pagenumber:1,pagelabel:"DOC-00000003",path:"Imports\\Media0001\\Natives\\0001\\DOC-00000003.docx"}
        ]
      }
    ]
  )
  {
    documentCount
  }
}

Sample response:

{
  "data": {
    "addDocumentsForImportJob": {
      "documentCount": 2
    }
  }
}

Configurable options:

  • documentId: String! imports the Document ID of the document.
  • hash: String imports the individual MD5 hash value of the document. This value is added to the [RT] MD5 Hash field in the case.
  • familyhash: String imports the family MD5 hash value of the document. This value is added to the [RT] Family MD5 Hash field in the case.
  • level: String, when set, overrides any level data set in the job options. Levels are not updated for existing documents.
  • parentId: String is the Document ID of the parent document, which establishes a source/attachment relationship. The source/attachment relationship is either updated or deleted depending on the value set for sourceattachmentaction.
  • sourceattachmentaction: SAAction determines which of the following actions to take for the parentId field:
    • Delete removes the source/attachment relationship coded on the document.
    • InsertUpdate inserts or updates the relationship.
  • pageaction: Action determines which of the following actions to take on the pages:
    • Update inserts or updates the pages of the document.
    • Delete removes the pages from the document.
    • Ignore ignores the incoming page values.
  • mainfields: [DocumentFieldParams] imports the following data into core document fields in the case:
    • name: DocumentField! is the name of the document field. The names correspond to the core document fields in the case: DocumentDate, DocumentDescription, DocumentTitle, DocumentType, EstimatedDate.
    • value: String is the value to populate in the document field, as follows:
      • DocumentDate is the Document Date of the document. Format is YYYY-MM-DD.
      • DocumentDescription is the Document Description of the document.
      • DocumentTitle is the Document Title of the document.
      • DocumentType is the Document Type of the document.
      • EstimatedDate is the Estimated Date of the document (a Boolean value).
    • action: CoreAction! determines which of the following actions to take on the incoming field data:
      • Update inserts or updates the value(s) of the field.
      • Delete removes coding from the document for the field.
      • Ignore ignores the value.
  • fields: [FieldParams] imports the following data into fields in the case:
    • name: String! is the name of the field. If the field exists, the existing field is used. If not, a field with that name is created using the indicated field type.
    • onetomany: Boolean defines whether the field is one-to-many.
    • type: FieldType! is the field type. The possible values are as follows:
      • Boolean allows you to set the value as Yes or No.
      • DateTime allows you to set the value in YYYY-MM-DD format.
      • Memo
      • Number
      • PickList
      • Text
    • action: Action! determines which of the following actions to take on the incoming data:
      • Append appends the value(s) to the field (only for one-to-many field types).
      • Delete removes coding from the document for the field.
      • InsertUpdate inserts or updates the value(s) of the field.
    • values: [String]! imports the value(s) for the field.
  • correspondence: [CorrespondenceType] imports the following people and organization values for the document:
    • type: String! determines the correspondence type. Possible values are To, From, CC, or BCC.
    • people: [String] contains a list of people values.
    • orgs: [String] contains a list of organization values.
    • action: Action! determines which of the following actions to take on the incoming field data:
      • Append appends the value(s) to the field (only for one-to-many field types).
      • Delete removes coding from the document for the field.
      • InsertUpdate inserts or updates the value(s) of the field.
  • pages: [PagesParams] imports the following values for the pages associated with the document:
    • pagenumber: Int! is the page number.
    • pagelabel: String is the page label of the page.
    • path: String! is the location of the physical file to upload.
  • contentfiles: [ContentFileParams] imports the list of content files for the document:
    • path: String! imports the location of the physical file to upload.
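
The actions can be mixed per document and per field. The following minimal sketch is illustrative only: the Document ID, field names, and values are hypothetical, and the caseId and rdxJobId arguments mirror the samples in this section. It appends a value to a one-to-many field, removes coding from a single-value field, and deletes a source/attachment relationship.

mutation {
  addDocumentsForImportJob (
    caseId:26,
    rdxJobId:325,
    documents:[
      {
        documentId:"DOC-00000005",
        sourceattachmentaction:Delete,
        fields:[
          {
            name:"Issues",onetomany:true,type:PickList,action:Append,values:["Privileged"]
          },
          {
            name:"Custodian",onetomany:false,type:PickList,action:Delete,values:"Custodian A"
          }
        ]
      }
    ]
  )
  {
    documentCount
  }
}

Append is valid here because the hypothetical Issues field is one-to-many; for the single-value Custodian field, Delete removes the existing coding.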

Submit an import job

After adding documents to a job using the addDocumentsForImportJob mutation, you can run the import job using the submitImportJob mutation.

Sample mutation:

mutation {
  submitImportJob (
    caseId:26,
    rdxJobId:325
  )
  {
    rpfJobId
  }
}

Sample response:

{
  "data": {
    "submitImportJob": {
      "rpfJobId": 11805
    }
  }
}
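
Because documents are staged on an import job before the job runs, a large import can be assembled in batches and submitted once. The following sketch assumes, though this section does not confirm it, that addDocumentsForImportJob can be called repeatedly with the same caseId and rdxJobId before a single submitImportJob; the Document IDs are hypothetical.

# First batch of documents (illustrative)
mutation {
  addDocumentsForImportJob (
    caseId:26,
    rdxJobId:325,
    documents:[
      { documentId:"DOC-00000100" },
      { documentId:"DOC-00000101" }
    ]
  )
  {
    documentCount
  }
}

# Second batch, staged on the same pending job
mutation {
  addDocumentsForImportJob (
    caseId:26,
    rdxJobId:325,
    documents:[
      { documentId:"DOC-00000102" }
    ]
  )
  {
    documentCount
  }
}

# Run the job once all batches are staged
mutation {
  submitImportJob (
    caseId:26,
    rdxJobId:325
  )
  {
    rpfJobId
  }
}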