Autopsy  4.15.0
Graphical digital forensics platform for The Sleuth Kit and other tools.
org.sleuthkit.autopsy.ingest.DataSourceIngestJob Class Reference


class  Snapshot
enum  Stages

Private Member Functions

void addIngestModules (List< IngestModuleTemplate > templates, IngestModuleType type, SleuthkitCase skCase) throws TskCoreException
void checkForStageCompleted ()
void createIngestPipelines ()
void finish ()
void finishFirstStage ()
boolean hasFileIngestPipeline ()
boolean hasFirstStageDataSourceIngestPipeline ()
boolean hasSecondStageDataSourceIngestPipeline ()
void logErrorMessage (Level level, String message, Throwable throwable)
void logErrorMessage (Level level, String message)
void logInfoMessage (String message)
void logIngestModuleErrors (List< IngestModuleError > errors)
void startDataSourceIngestProgressBar ()
void startFileIngestProgressBar ()
void startFirstStage ()
void startSecondStage ()
List< IngestModuleError > startUpIngestPipelines ()

Static Private Member Functions

static void addModule (Map< String, IngestModuleTemplate > mapping, Map< String, IngestModuleTemplate > jythonMapping, IngestModuleTemplate template)
static void addOrdered (final List< IngestModuleTemplate > dest, final Map< String, IngestModuleTemplate > src, final Map< String, IngestModuleTemplate > jythonSrc)
static List< IngestModuleTemplate > getConfiguredIngestModuleTemplates (Map< String, IngestModuleTemplate > ingestModuleTemplates, Map< String, IngestModuleTemplate > jythonIngestModuleTemplates, List< String > pipelineConfig)
static String getJythonName (String canonicalName)

Private Attributes

volatile IngestJob.CancellationReason cancellationReason = IngestJob.CancellationReason.NOT_CANCELLED
volatile boolean cancelled
final List< String > cancelledDataSourceIngestModules = new CopyOnWriteArrayList<>()
final long createTime
volatile boolean currentDataSourceIngestModuleCancelled
DataSourceIngestPipeline currentDataSourceIngestPipeline
String currentFileIngestModule = ""
String currentFileIngestTask = ""
final Content dataSource
final Object dataSourceIngestPipelineLock = new Object()
ProgressHandle dataSourceIngestProgress
final Object dataSourceIngestProgressLock = new Object()
final boolean doUI
long estimatedFilesToProcess
final List< FileIngestPipeline > fileIngestPipelines = new ArrayList<>()
final LinkedBlockingQueue< FileIngestPipeline > fileIngestPipelinesQueue = new LinkedBlockingQueue<>()
ProgressHandle fileIngestProgress
final Object fileIngestProgressLock = new Object()
final List< AbstractFile > files = new ArrayList<>()
final List< String > filesInProgress = new ArrayList<>()
DataSourceIngestPipeline firstStageDataSourceIngestPipeline
final long id
volatile IngestJobInfo ingestJob
final List< IngestModuleInfo > ingestModules = new ArrayList<>()
final IngestJob parentJob
long processedFiles
DataSourceIngestPipeline secondStageDataSourceIngestPipeline
final IngestJobSettings settings
volatile Stages stage = DataSourceIngestJob.Stages.INITIALIZATION
final Object stageCompletionCheckLock = new Object()

Static Private Attributes

static String AUTOPSY_MODULE_PREFIX = "org.sleuthkit.autopsy"
static final Pattern JYTHON_REGEX = Pattern.compile("org\\.python\\.proxies\\.(.+?)\\$(.+?)(\\$[0-9]*)?$")
static final Logger logger = Logger.getLogger(DataSourceIngestJob.class.getName())
static final AtomicLong nextJobId = new AtomicLong(0L)
static final IngestTasksScheduler taskScheduler = IngestTasksScheduler.getInstance()

Detailed Description

Encapsulates a data source and the ingest module pipelines used to process it.

Definition at line 62 of file

Member Function Documentation

void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.addIngestModules ( List< IngestModuleTemplate > templates, IngestModuleType type, SleuthkitCase skCase ) throws TskCoreException
static void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.addModule ( Map< String, IngestModuleTemplate > mapping, Map< String, IngestModuleTemplate > jythonMapping, IngestModuleTemplate template )

Adds a template to the appropriate map. If the class is a jython class, then it is added to the jython map. Otherwise, it is added to the mapping.

mapping: Mapping for non-jython objects.
jythonMapping: Mapping for jython objects.
template: The template to add.

Definition at line 283 of file

References org.sleuthkit.autopsy.ingest.DataSourceIngestJob.getJythonName().

Referenced by org.sleuthkit.autopsy.ingest.DataSourceIngestJob.createIngestPipelines().
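The dispatch described above can be sketched in a simplified, standalone form. The class name, the plain-string stand-in for IngestModuleTemplate, and the prefix check (standing in for the real getJythonName() lookup) are illustrative assumptions, not the actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class AddModuleDemo {
    // Sketch of the documented dispatch: Jython proxy classes go into the
    // jython map, everything else into the regular map. The real code keys
    // the jython map by the translated jython name; here a simple prefix
    // check and the raw class name stand in for that logic.
    static void addModule(Map<String, String> mapping,
                          Map<String, String> jythonMapping,
                          String factoryClassName) {
        if (factoryClassName.startsWith("org.python.proxies.")) {
            jythonMapping.put(factoryClassName, factoryClassName);
        } else {
            mapping.put(factoryClassName, factoryClassName);
        }
    }

    public static void main(String[] args) {
        Map<String, String> mapping = new HashMap<>();
        Map<String, String> jythonMapping = new HashMap<>();
        addModule(mapping, jythonMapping, "org.sleuthkit.autopsy.examples.SampleFactory");
        addModule(mapping, jythonMapping, "org.python.proxies.MyModule$MyFactory$3");
        System.out.println(mapping.size() + " " + jythonMapping.size()); // 1 1
    }
}
```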

static void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.addOrdered ( final List< IngestModuleTemplate > dest, final Map< String, IngestModuleTemplate > src, final Map< String, IngestModuleTemplate > jythonSrc )

Adds ingest module templates to a list, with Autopsy modules first and third-party modules next.

dest: The destination list for the modules to be added.
src: A map of fully qualified class names to IngestModuleTemplate objects.
jythonSrc: A map of fully qualified class names to IngestModuleTemplate objects for jython modules.

Definition at line 237 of file

Referenced by org.sleuthkit.autopsy.ingest.DataSourceIngestJob.createIngestPipelines().
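The ordering rule above can be sketched as follows. This is a simplified stand-in, not the actual source: templates are represented by plain strings, and the assumption that jython proxies never carry the Autopsy prefix (and so sort with the third-party modules) is an inference from the documented maps:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AddOrderedDemo {
    private static final String AUTOPSY_MODULE_PREFIX = "org.sleuthkit.autopsy";

    // Sketch of the documented ordering: entries whose factory class name
    // starts with the Autopsy prefix are appended first, then everything
    // else (third-party Java modules and jython modules).
    static void addOrdered(List<String> dest,
                           Map<String, String> src,
                           Map<String, String> jythonSrc) {
        List<String> thirdParty = new ArrayList<>();
        for (Map.Entry<String, String> e : src.entrySet()) {
            if (e.getKey().startsWith(AUTOPSY_MODULE_PREFIX)) {
                dest.add(e.getValue());
            } else {
                thirdParty.add(e.getValue());
            }
        }
        // Jython proxy classes are keyed by their translated names,
        // which never carry the Autopsy prefix (assumption).
        thirdParty.addAll(jythonSrc.values());
        dest.addAll(thirdParty);
    }

    public static void main(String[] args) {
        Map<String, String> src = new LinkedHashMap<>();
        src.put("com.example.ThirdPartyFactory", "ThirdParty");
        src.put("org.sleuthkit.autopsy.hash.HashFactory", "Hash");
        Map<String, String> jython = new LinkedHashMap<>();
        jython.put("MyModule.MyFactory", "JythonMod");
        List<String> dest = new ArrayList<>();
        addOrdered(dest, src, jython);
        System.out.println(dest); // [Hash, ThirdParty, JythonMod]
    }
}
```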

void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.checkForStageCompleted ( )
void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.createIngestPipelines ( )

Creates the file and data source ingest pipelines.

Make mappings of ingest module factory class names to templates.

Use the mappings and the ingest pipelines configuration to create ordered lists of ingest module templates for each ingest pipeline.

Add any module templates that were not specified in the pipelines configuration to an appropriate pipeline - either the first stage data source ingest pipeline or the file ingest pipeline.

Construct the data source ingest pipelines.

Construct the file ingest pipelines, one per file ingest thread.

The current thread was interrupted while blocked on a full queue. Blocking should actually never happen here, but reset the interrupted flag rather than just swallowing the exception.

Definition at line 298 of file

References org.sleuthkit.autopsy.ingest.DataSourceIngestJob.addIngestModules(), org.sleuthkit.autopsy.ingest.DataSourceIngestJob.addModule(), org.sleuthkit.autopsy.ingest.DataSourceIngestJob.addOrdered(), org.sleuthkit.autopsy.ingest.DataSourceIngestJob.getConfiguredIngestModuleTemplates(), org.sleuthkit.autopsy.casemodule.Case.getCurrentCaseThrows(), org.sleuthkit.autopsy.ingest.IngestJobSettings.getEnabledIngestModuleTemplates(), org.sleuthkit.autopsy.ingest.IngestManager.getInstance(), org.sleuthkit.autopsy.ingest.IngestManager.getNumberOfFileIngestThreads(), org.sleuthkit.autopsy.casemodule.Case.getSleuthkitCase(), and org.sleuthkit.autopsy.ingest.DataSourceIngestJob.logErrorMessage().
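The note above about resetting the interrupted flag is the standard Java idiom for handling InterruptedException from a blocking queue operation. A minimal self-contained sketch (the method name is illustrative, not from the source):

```java
import java.util.concurrent.LinkedBlockingQueue;

public class InterruptHandlingDemo {
    // If put() is interrupted while blocked on a full (bounded) queue,
    // restore the thread's interrupt status rather than swallowing the
    // exception, so code further up the stack can still observe the
    // interruption.
    static boolean offerPipeline(LinkedBlockingQueue<String> queue, String pipeline) {
        try {
            queue.put(pipeline);
            return true;
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt(); // reset the interrupted flag
            return false;
        }
    }

    public static void main(String[] args) {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
        System.out.println(offerPipeline(queue, "file-ingest-pipeline-1")); // true
        System.out.println(queue.size()); // 1
    }
}
```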

void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.finish ( )
void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.finishFirstStage ( )
static List<IngestModuleTemplate> org.sleuthkit.autopsy.ingest.DataSourceIngestJob.getConfiguredIngestModuleTemplates ( Map< String, IngestModuleTemplate > ingestModuleTemplates, Map< String, IngestModuleTemplate > jythonIngestModuleTemplates, List< String > pipelineConfig )

Uses an input collection of ingest module templates and a pipeline configuration, i.e., an ordered list of ingest module factory class names, to create an ordered output list of ingest module templates for an ingest pipeline. The ingest module templates are removed from the input collection as they are added to the output collection.

ingestModuleTemplates: A mapping of ingest module factory class names to ingest module templates.
jythonIngestModuleTemplates: A mapping of jython processed class names to jython ingest module templates.
pipelineConfig: An ordered list of ingest module factory class names representing an ingest pipeline.
Returns: An ordered list of ingest module templates, i.e., an uninstantiated pipeline.

Definition at line 403 of file

Referenced by org.sleuthkit.autopsy.ingest.DataSourceIngestJob.createIngestPipelines().
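The algorithm described above, including the removal of templates from the input maps as they are consumed, can be sketched like this. Templates are stood in for by plain strings; the class and method names here are illustrative, not the actual source:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PipelineConfigDemo {
    // Walk the pipeline configuration (an ordered list of factory class
    // names) and, for each entry found in either template map, move it
    // into the ordered output list. Leftover templates stay behind in
    // the input maps for the caller to place elsewhere.
    static List<String> getConfiguredTemplates(Map<String, String> templates,
                                               Map<String, String> jythonTemplates,
                                               List<String> pipelineConfig) {
        List<String> ordered = new ArrayList<>();
        for (String name : pipelineConfig) {
            if (templates.containsKey(name)) {
                ordered.add(templates.remove(name)); // removed from input as it is added
            } else if (jythonTemplates.containsKey(name)) {
                ordered.add(jythonTemplates.remove(name));
            }
        }
        return ordered;
    }

    public static void main(String[] args) {
        Map<String, String> templates = new HashMap<>();
        templates.put("org.example.A", "A");
        templates.put("org.example.B", "B");
        Map<String, String> jython = new HashMap<>();
        jython.put("Mod.Factory", "J");
        List<String> config = List.of("org.example.B", "Mod.Factory", "org.example.A");
        System.out.println(getConfiguredTemplates(templates, jython, config)); // [B, J, A]
        System.out.println(templates); // consumed entries are gone: {}
    }
}
```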

static String org.sleuthkit.autopsy.ingest.DataSourceIngestJob.getJythonName ( String  canonicalName)

Takes a class name like "org.python.proxies.GPX_Parser_Module$GPXParserFileIngestModuleFactory$14" and returns "GPX_Parser_Module.GPXParserFileIngestModuleFactory", or null if the class is not in a jython package.

canonicalName: The canonical name.
Returns: The jython name, or null if not in a jython package.

Definition at line 265 of file

Referenced by org.sleuthkit.autopsy.ingest.DataSourceIngestJob.addModule().
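The documented transformation can be reproduced with the class's JYTHON_REGEX pattern. This standalone sketch mirrors the described behavior (the demo class and its method are illustrative, not the actual source):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JythonNameDemo {
    // Pattern taken from the class's JYTHON_REGEX field: group 1 captures
    // the jython module name, group 2 the factory class name, and the
    // optional trailing group swallows the "$<number>" proxy suffix.
    private static final Pattern JYTHON_REGEX =
            Pattern.compile("org\\.python\\.proxies\\.(.+?)\\$(.+?)(\\$[0-9]*)?$");

    // Join the captured module and factory names with a dot, or return
    // null for class names outside the jython proxy package.
    static String getJythonName(String canonicalName) {
        Matcher m = JYTHON_REGEX.matcher(canonicalName);
        return m.find() ? m.group(1) + "." + m.group(2) : null;
    }

    public static void main(String[] args) {
        System.out.println(getJythonName(
                "org.python.proxies.GPX_Parser_Module$GPXParserFileIngestModuleFactory$14"));
        // Non-jython class names yield null.
        System.out.println(getJythonName("com.example.Foo"));
    }
}
```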

boolean org.sleuthkit.autopsy.ingest.DataSourceIngestJob.hasFileIngestPipeline ( )

Checks to see if this job has a file level ingest pipeline.

Returns: True if this job has a file level ingest pipeline, false otherwise.

Definition at line 498 of file

Referenced by org.sleuthkit.autopsy.ingest.DataSourceIngestJob.startFirstStage().

boolean org.sleuthkit.autopsy.ingest.DataSourceIngestJob.hasFirstStageDataSourceIngestPipeline ( )

Checks to see if this job has a first stage data source level ingest pipeline.

Returns: True if this job has a first stage data source level ingest pipeline, false otherwise.

Definition at line 479 of file

Referenced by org.sleuthkit.autopsy.ingest.DataSourceIngestJob.startFirstStage().

boolean org.sleuthkit.autopsy.ingest.DataSourceIngestJob.hasSecondStageDataSourceIngestPipeline ( )

Checks to see if this job has a second stage data source level ingest pipeline.

Returns: True if this job has a second stage data source level ingest pipeline, false otherwise.

Definition at line 489 of file

Referenced by org.sleuthkit.autopsy.ingest.DataSourceIngestJob.finishFirstStage().

void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.logErrorMessage ( Level level, String message, Throwable throwable )

Writes an error message to the application log that includes the data source name, data source object id, and the job id.

level: The logging level for the message.
message: The message.
throwable: The throwable associated with the error.

Definition at line 1179 of file


Referenced by org.sleuthkit.autopsy.ingest.DataSourceIngestJob.createIngestPipelines(), org.sleuthkit.autopsy.ingest.DataSourceIngestJob.finish(), and org.sleuthkit.autopsy.ingest.DataSourceIngestJob.logIngestModuleErrors().

void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.logErrorMessage ( Level level, String message )

Writes an error message to the application log that includes the data source name, data source object id, and the job id.

level: The logging level for the message.
message: The message.

Definition at line 1190 of file


void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.logInfoMessage ( String  message)
void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.logIngestModuleErrors ( List< IngestModuleError > errors )
void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.startDataSourceIngestProgressBar ( )
void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.startFileIngestProgressBar ( )
void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.startFirstStage ( )

Starts the first stage of this job.

Start one or both of the first stage ingest progress bars.

Make the first stage data source level ingest pipeline the current data source level pipeline.

Schedule the first stage tasks.

No data source ingest task has been scheduled for this stage, and it is possible, though unlikely, that no file ingest tasks were scheduled either, because the task scheduler filters out some files. In that special case, no ingest thread will ever check for completion of this stage of the job, so do the check now.

Definition at line 583 of file

References org.sleuthkit.autopsy.ingest.DataSourceIngestJob.checkForStageCompleted(), org.sleuthkit.autopsy.ingest.DataSourceIngestJob.dataSourceIngestPipelineLock, org.sleuthkit.autopsy.ingest.DataSourceIngestJob.fileIngestProgressLock, org.sleuthkit.autopsy.ingest.DataSourceIngestJob.Stages.FIRST, org.sleuthkit.autopsy.ingest.DataSourceIngestJob.firstStageDataSourceIngestPipeline, org.sleuthkit.autopsy.ingest.DataSourceIngestJob.hasFileIngestPipeline(), org.sleuthkit.autopsy.ingest.DataSourceIngestJob.hasFirstStageDataSourceIngestPipeline(), org.sleuthkit.autopsy.ingest.DataSourceIngestJob.logInfoMessage(), org.sleuthkit.autopsy.ingest.DataSourceIngestJob.startDataSourceIngestProgressBar(), org.sleuthkit.autopsy.ingest.DataSourceIngestJob.startFileIngestProgressBar(), and org.sleuthkit.autopsy.ingest.DataSourceIngestJob.taskScheduler.

void org.sleuthkit.autopsy.ingest.DataSourceIngestJob.startSecondStage ( )
List<IngestModuleError> org.sleuthkit.autopsy.ingest.DataSourceIngestJob.startUpIngestPipelines ( )

Starts up each of the ingest pipelines for this job and collects any file and data source level ingest module startup errors that occur.

Returns: A collection of ingest module startup errors, empty on success.

Definition at line 535 of file

References org.sleuthkit.autopsy.ingest.DataSourceIngestJob.logIngestModuleErrors().

Member Data Documentation

String org.sleuthkit.autopsy.ingest.DataSourceIngestJob.AUTOPSY_MODULE_PREFIX = "org.sleuthkit.autopsy"

Definition at line 64 of file

volatile IngestJob.CancellationReason org.sleuthkit.autopsy.ingest.DataSourceIngestJob.cancellationReason = IngestJob.CancellationReason.NOT_CANCELLED

Definition at line 148 of file

volatile boolean org.sleuthkit.autopsy.ingest.DataSourceIngestJob.cancelled

Definition at line 147 of file

final List<String> org.sleuthkit.autopsy.ingest.DataSourceIngestJob.cancelledDataSourceIngestModules = new CopyOnWriteArrayList<>()

Definition at line 146 of file

final long org.sleuthkit.autopsy.ingest.DataSourceIngestJob.createTime

A data source ingest job uses this field to report its creation time.

Definition at line 186 of file

volatile boolean org.sleuthkit.autopsy.ingest.DataSourceIngestJob.currentDataSourceIngestModuleCancelled

A data source ingest job supports cancellation of either the currently running data source level ingest module or the entire ingest job.

TODO: The currentDataSourceIngestModuleCancelled field and all of the code concerned with it is a hack to avoid an API change. The next time an API change is legal, a cancel() method needs to be added to the IngestModule interface and this field should be removed. The "ingest job is canceled" queries should also be removed from the IngestJobContext class.

Definition at line 145 of file

DataSourceIngestPipeline org.sleuthkit.autopsy.ingest.DataSourceIngestJob.currentDataSourceIngestPipeline

Definition at line 122 of file

String org.sleuthkit.autopsy.ingest.DataSourceIngestJob.currentFileIngestModule = ""

Definition at line 178 of file

String org.sleuthkit.autopsy.ingest.DataSourceIngestJob.currentFileIngestTask = ""

Definition at line 179 of file

final Content org.sleuthkit.autopsy.ingest.DataSourceIngestJob.dataSource

Definition at line 81 of file

final Object org.sleuthkit.autopsy.ingest.DataSourceIngestJob.dataSourceIngestPipelineLock = new Object()

A data source ingest job has separate data source level ingest module pipelines for the first and second processing stages. Longer running, lower priority modules belong in the second stage pipeline, although this cannot be enforced. Note that the pipelines for both stages are created at job start up to allow for verification that they both can be started up without errors.

Definition at line 119 of file

Referenced by org.sleuthkit.autopsy.ingest.DataSourceIngestJob.startFirstStage(), and org.sleuthkit.autopsy.ingest.DataSourceIngestJob.startSecondStage().

ProgressHandle org.sleuthkit.autopsy.ingest.DataSourceIngestJob.dataSourceIngestProgress

Definition at line 167 of file

final Object org.sleuthkit.autopsy.ingest.DataSourceIngestJob.dataSourceIngestProgressLock = new Object()
final boolean org.sleuthkit.autopsy.ingest.DataSourceIngestJob.doUI

A data source ingest job can run interactively using NetBeans progress handles.

Definition at line 160 of file

long org.sleuthkit.autopsy.ingest.DataSourceIngestJob.estimatedFilesToProcess

Definition at line 175 of file

final List<FileIngestPipeline> org.sleuthkit.autopsy.ingest.DataSourceIngestJob.fileIngestPipelines = new ArrayList<>()

Definition at line 132 of file

final LinkedBlockingQueue<FileIngestPipeline> org.sleuthkit.autopsy.ingest.DataSourceIngestJob.fileIngestPipelinesQueue = new LinkedBlockingQueue<>()

A data source ingest job has a collection of identical file level ingest module pipelines, one for each file level ingest thread in the ingest manager. A blocking queue is used to dole out the pipelines to the threads and an ordinary list is used when the ingest job needs to access the pipelines to query their status.

Definition at line 131 of file
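The dole-out scheme described above can be sketched with the standard LinkedBlockingQueue API. The names mirror the documented fields but the pipelines are stood in for by plain strings, and the thread count is a hard-coded assumption (the real value comes from the ingest manager):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;

public class PipelineQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // A blocking queue doles pipelines out to file ingest threads,
        // while a plain list keeps a handle on all of them so the job
        // can query their status.
        List<String> fileIngestPipelines = new ArrayList<>();
        LinkedBlockingQueue<String> fileIngestPipelinesQueue = new LinkedBlockingQueue<>();
        int numberOfFileIngestThreads = 2; // assumption; the real count comes from the IngestManager
        for (int i = 0; i < numberOfFileIngestThreads; i++) {
            String pipeline = "pipeline-" + i;
            fileIngestPipelines.add(pipeline);
            fileIngestPipelinesQueue.put(pipeline);
        }
        // A worker thread borrows a pipeline, uses it, and returns it;
        // take() blocks until a pipeline is available.
        String borrowed = fileIngestPipelinesQueue.take();
        System.out.println(borrowed);
        fileIngestPipelinesQueue.put(borrowed);
        System.out.println(fileIngestPipelinesQueue.size()); // all pipelines back in the queue
    }
}
```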

ProgressHandle org.sleuthkit.autopsy.ingest.DataSourceIngestJob.fileIngestProgress

Definition at line 177 of file

final Object org.sleuthkit.autopsy.ingest.DataSourceIngestJob.fileIngestProgressLock = new Object()
final List<AbstractFile> org.sleuthkit.autopsy.ingest.DataSourceIngestJob.files = new ArrayList<>()

Definition at line 82 of file

final List<String> org.sleuthkit.autopsy.ingest.DataSourceIngestJob.filesInProgress = new ArrayList<>()

Definition at line 174 of file

DataSourceIngestPipeline org.sleuthkit.autopsy.ingest.DataSourceIngestJob.firstStageDataSourceIngestPipeline
final long org.sleuthkit.autopsy.ingest.DataSourceIngestJob.id
volatile IngestJobInfo org.sleuthkit.autopsy.ingest.DataSourceIngestJob.ingestJob

Definition at line 181 of file

final List<IngestModuleInfo> org.sleuthkit.autopsy.ingest.DataSourceIngestJob.ingestModules = new ArrayList<>()

Definition at line 180 of file

final Pattern org.sleuthkit.autopsy.ingest.DataSourceIngestJob.JYTHON_REGEX = Pattern.compile("org\\.python\\.proxies\\.(.+?)\\$(.+?)(\\$[0-9]*)?$")

Definition at line 69 of file

final Logger org.sleuthkit.autopsy.ingest.DataSourceIngestJob.logger = Logger.getLogger(DataSourceIngestJob.class.getName())

Definition at line 66 of file

final AtomicLong org.sleuthkit.autopsy.ingest.DataSourceIngestJob.nextJobId = new AtomicLong(0L)

Definition at line 78 of file

final IngestJob org.sleuthkit.autopsy.ingest.DataSourceIngestJob.parentJob

These fields define a data source ingest job: the parent ingest job, an ID, the user's ingest job settings, and the data source to be analyzed. Optionally, there is a set of files to be analyzed instead of analyzing all of the files in the data source.

Definition at line 77 of file

long org.sleuthkit.autopsy.ingest.DataSourceIngestJob.processedFiles

Definition at line 176 of file

DataSourceIngestPipeline org.sleuthkit.autopsy.ingest.DataSourceIngestJob.secondStageDataSourceIngestPipeline
final IngestJobSettings org.sleuthkit.autopsy.ingest.DataSourceIngestJob.settings

Definition at line 80 of file

volatile Stages org.sleuthkit.autopsy.ingest.DataSourceIngestJob.stage = DataSourceIngestJob.Stages.INITIALIZATION

Definition at line 108 of file

final Object org.sleuthkit.autopsy.ingest.DataSourceIngestJob.stageCompletionCheckLock = new Object()
final IngestTasksScheduler org.sleuthkit.autopsy.ingest.DataSourceIngestJob.taskScheduler = IngestTasksScheduler.getInstance()

A data source ingest job uses the task scheduler singleton to create and queue the ingest tasks that make up the job.

Definition at line 154 of file

Referenced by org.sleuthkit.autopsy.ingest.DataSourceIngestJob.checkForStageCompleted(), org.sleuthkit.autopsy.ingest.DataSourceIngestJob.startFirstStage(), and org.sleuthkit.autopsy.ingest.DataSourceIngestJob.startSecondStage().

The documentation for this class was generated from the following file:

Copyright © 2012-2020 Basis Technology. Generated on: Mon Jul 6 2020
This work is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License.