A worker appears in the removed-executors list with its state shown as KILLED.
However, since an output file is still being produced (see the log below), it is not yet clear whether this is a real problem.
Apr 14, 2015 6:56:45 PM INFO: parquet.hadoop.codec.CodecConfig: Compression: GZIP
Apr 14, 2015 6:56:45 PM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728
Apr 14, 2015 6:56:45 PM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576
Apr 14, 2015 6:56:45 PM INFO: parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576
Apr 14, 2015 6:56:45 PM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is on
Apr 14, 2015 6:56:45 PM INFO: parquet.hadoop.ParquetOutputFormat: Validation is off
Apr 14, 2015 6:56:45 PM INFO: parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
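The writer settings reported above map onto parquet-hadoop job configuration keys. As a sketch (assuming parquet-mr 1.x key names; values taken from the log), the equivalent Hadoop configuration would be:

```properties
# Hypothetical reconstruction of the job configuration implied by the log above
parquet.compression=GZIP
parquet.block.size=134217728
parquet.page.size=1048576
parquet.dictionary.page.size=1048576
parquet.enable.dictionary=true
parquet.validation=false
parquet.writer.version=PARQUET_1_0
```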
Apr 14, 2015 6:56:59 PM INFO: parquet.hadoop.InternalParquetRecordWriter: Flushing mem store to file. allocated memory: 96,056,480
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 41,000B for [contig, contigName] BINARY: 308,846 values, 77,721B raw, 40,977B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 41,000B for [contig, contigLength] INT64: 308,846 values, 77,721B raw, 40,977B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 40,646B for [contig, contigMD5] BINARY: 308,846 values, 77,717B raw, 40,623B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 40,646B for [contig, referenceURL] BINARY: 308,846 values, 77,717B raw, 40,623B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 40,646B for [contig, assembly] BINARY: 308,846 values, 77,717B raw, 40,623B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 40,646B for [contig, species] BINARY: 308,846 values, 77,717B raw, 40,623B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 338,355B for [start] INT64: 308,846 values, 596,823B raw, 338,332B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [oldPosition] INT64: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 338,429B for [end] INT64: 308,846 values, 596,823B raw, 338,406B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 81,665B for [mapq] INT32: 308,846 values, 105,817B raw, 81,642B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 71 entries, 284B raw, 71B comp}
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 903,697B for [readName] BINARY: 308,846 values, 6,544,586B raw, 903,530B comp, 7 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 9,287,498B for [sequence] BINARY: 308,846 values, 32,120,231B raw, 9,286,756B comp, 31 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 10,872,009B for [qual] BINARY: 308,846 values, 32,120,231B raw, 10,871,267B comp, 31 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 141,052B for [cigar] BINARY: 308,846 values, 452,474B raw, 141,006B comp, 2 pages, encodings: [RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 13,084 entries, 255,861B raw, 13,084B comp}
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [oldCigar] BINARY: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 97B for [basesTrimmedFromStart] INT32: 308,846 values, 24B raw, 59B comp, 2 pages, encodings: [RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 97B for [basesTrimmedFromEnd] INT32: 308,846 values, 24B raw, 59B comp, 2 pages, encodings: [RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 104B for [readPaired] BOOLEAN: 308,846 values, 38,614B raw, 82B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 104B for [properPair] BOOLEAN: 308,846 values, 38,614B raw, 82B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 31,185B for [readMapped] BOOLEAN: 308,846 values, 38,614B raw, 31,162B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 104B for [mateMapped] BOOLEAN: 308,846 values, 38,614B raw, 82B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 104B for [firstOfPair] BOOLEAN: 308,846 values, 38,614B raw, 82B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 104B for [secondOfPair] BOOLEAN: 308,846 values, 38,614B raw, 82B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 104B for [failedVendorQualityChecks] BOOLEAN: 308,846 values, 38,614B raw, 82B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 104B for [duplicateRead] BOOLEAN: 308,846 values, 38,614B raw, 82B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 22,508B for [readNegativeStrand] BOOLEAN: 308,846 values, 38,614B raw, 22,485B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 104B for [mateNegativeStrand] BOOLEAN: 308,846 values, 38,614B raw, 82B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 31,185B for [primaryAlignment] BOOLEAN: 308,846 values, 38,614B raw, 31,162B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 104B for [secondaryAlignment] BOOLEAN: 308,846 values, 38,614B raw, 82B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 104B for [supplementaryAlignment] BOOLEAN: 308,846 values, 38,614B raw, 82B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [mismatchingPositions] BINARY: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [origQual] BINARY: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 82,289B for [attributes] BINARY: 308,846 values, 142,604B raw, 81,848B comp, 21 pages, encodings: [RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 16 entries, 1,110B raw, 16B comp}
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 147B for [recordGroupName] BINARY: 308,846 values, 36B raw, 90B comp, 3 pages, encodings: [RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 9B raw, 1B comp}
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [recordGroupSequencingCenter] BINARY: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [recordGroupDescription] BINARY: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [recordGroupRunDateEpoch] INT64: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [recordGroupFlowOrder] BINARY: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [recordGroupKeySequence] BINARY: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 97B for [recordGroupLibrary] BINARY: 308,846 values, 24B raw, 59B comp, 2 pages, encodings: [RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 6B raw, 1B comp}
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [recordGroupPredictedMedianInsertSize] INT32: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 196B for [recordGroupPlatform] BINARY: 308,846 values, 48B raw, 120B comp, 4 pages, encodings: [RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 12B raw, 1B comp}
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 97B for [recordGroupPlatformUnit] BINARY: 308,846 values, 24B raw, 59B comp, 2 pages, encodings: [RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 6B raw, 1B comp}
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 97B for [recordGroupSample] BINARY: 308,846 values, 24B raw, 59B comp, 2 pages, encodings: [RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 6B raw, 1B comp}
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [mateAlignmentStart] INT64: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [mateAlignmentEnd] INT64: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [mateContig, contigName] BINARY: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [mateContig, contigLength] INT64: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [mateContig, contigMD5] BINARY: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [mateContig, referenceURL] BINARY: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [mateContig, assembly] BINARY: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [mateContig, species] BINARY: 308,846 values, 8B raw, 28B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
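To gauge which columns dominate the output file, the `written …B` figures in the ColumnChunkPageWriteStore lines can be parsed mechanically. The helper below is a hypothetical sketch (not part of Parquet or ADAM), assuming only the log format shown above:

```python
import re

# Matches the per-column summary lines emitted by ColumnChunkPageWriteStore,
# e.g. "... written 9,287,498B for [sequence] BINARY: 308,846 values,
#       32,120,231B raw, 9,286,756B comp, ..."
LINE_RE = re.compile(
    r"written ([\d,]+)B for \[([^\]]+)\].*?([\d,]+)B raw, ([\d,]+)B comp"
)

def parse_chunk_line(line):
    """Return (column, written_bytes, raw_bytes, compressed_bytes), or None."""
    m = LINE_RE.search(line)
    if m is None:
        return None
    to_int = lambda s: int(s.replace(",", ""))
    return m.group(2), to_int(m.group(1)), to_int(m.group(3)), to_int(m.group(4))

# Example: the [sequence] column line from the log above.
line = ("Apr 14, 2015 6:57:00 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: "
        "written 9,287,498B for [sequence] BINARY: 308,846 values, "
        "32,120,231B raw, 9,286,756B comp, 31 pages, "
        "encodings: [RLE, BIT_PACKED, PLAIN]")
col, written, raw, comp = parse_chunk_line(line)
print(col, written, round(raw / comp, 1))
```

Applied to the full log, this confirms that `sequence` and `qual` account for almost all of the bytes written, which is typical for read data under GZIP.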