
Package geofetch Documentation

Package Overview

The geofetch package provides tools for downloading metadata and data from Gene Expression Omnibus (GEO) and Sequence Read Archive (SRA). It can convert GEO/SRA metadata into PEP format for easy integration with other PEPkit tools.

Key Features

  • GEO/SRA Download: Fetch metadata and raw data from NCBI repositories
  • PEP Generation: Automatically create PEP-formatted project configs
  • Flexible Filtering: Search and filter GEO datasets by date and criteria
  • SRA Integration: Download and convert SRA data to FASTQ format
  • Processed Data: Download processed data matrices from GEO

Installation

pip install geofetch

Quick Example

from geofetch import Geofetcher

# Initialize geofetcher
gf = Geofetcher()

# Fetch a GEO series
gf.fetch_all(input="GSE####", name="my_project")

API Reference

Geofetcher Class

The main class for fetching data from GEO/SRA:

Geofetcher

Geofetcher(name='', metadata_root='', metadata_folder='', just_metadata=False, refresh_metadata=False, config_template=None, pipeline_samples=None, pipeline_project=None, skip=0, acc_anno=False, use_key_subset=False, processed=False, data_source='samples', filter=None, filter_size=None, geo_folder='.', split_experiments=False, bam_folder='', fq_folder='', sra_folder='', bam_conversion=False, picard_path='', input=None, const_limit_project=50, const_limit_discard=1000, attr_limit_truncate=500, max_soft_size='1GB', discard_soft=False, add_dotfile=False, disable_progressbar=False, add_convert_modifier=False, opts=None, max_prefetch_size=None, **kwargs)

Class to download or retrieve projects, metadata, and data from GEO and SRA.

Constructor.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| input | str \| None | GSE number or path to the input file. | None |
| name | str | Project name. Defaults to the GSE number or the accessions file name. | '' |
| metadata_root | str | Parent folder location to store metadata; the project name is added as a subfolder (Default: $SRAMETA:). | '' |
| metadata_folder | str | Absolute folder location to store metadata; no subfolder is added. Overrides the value of --metadata-root. | '' |
| just_metadata | bool | If set, don't run downloads; just create metadata. | False |
| refresh_metadata | bool | If set, re-download metadata even if it exists. | False |
| config_template | str \| None | Project config YAML file template. | None |
| pipeline_samples | str \| None | One or more filepaths to SAMPLES pipeline interface YAML files, added to the project config to make it immediately compatible with looper. | None |
| pipeline_project | str \| None | One or more filepaths to PROJECT pipeline interface YAML files, added to the project config to make it immediately compatible with looper. | None |
| acc_anno | bool | Produce annotation sheets for each accession; no combined PEP for the whole project is produced. | False |
| discard_soft | bool | Create the project without saving SOFT files to disk. | False |
| add_dotfile | bool | Add a .pep.yaml file that points to the .yaml PEP file. | False |
| disable_progressbar | bool | Set to True to disable the progress bar. | False |
| const_limit_project | int | Maximum length, in characters, of constant sample attributes stored in the project YAML. | 50 |
| const_limit_discard | int | Constant sample attributes longer than this (in characters) are discarded. | 1000 |
| attr_limit_truncate | int | Sample attribute character limit; any attribute longer than this is truncated to its first X characters. | 500 |
| max_soft_size | str | Maximum size of a SOFT file. Supported input formats: 12B, 12KB, 12MB, 12GB. | '1GB' |
| processed | bool | Download processed data (default: download raw data). | False |
| data_source | str | Source of processed data on the GEO record, which may be attached to the collective series entity or to individual samples. Allowable values: samples, series, or all (both). Ignored unless 'processed' is set. | 'samples' |
| filter | str \| None | Regex filter for processed filenames. Ignored unless 'processed' is set. | None |
| filter_size | str \| None | Size filter for processed files stored in the sample repository; works only for sample data. Supported input formats: 12B, 12KB, 12MB, 12GB. Ignored unless 'processed' is set. | None |
| geo_folder | str | Location to store processed GEO files (Default: $GEODATA:). Ignored unless 'processed' is set. | '.' |
| split_experiments | bool | Treat each SRR run as a separate sample. By default, an SRX experiment with multiple SRR runs gets a single entry in the annotation table, with each run as a row in the subannotation table. [Raw data only.] | False |
| bam_folder | str | Folder of BAM files; SRA files are not downloaded when corresponding BAM files already exist (Default: $SRABAM:). [Raw data only.] | '' |
| fq_folder | str | Folder of FASTQ files; SRA files are not downloaded when corresponding FASTQ files already exist (Default: $SRAFQ:). [Raw data only.] | '' |
| use_key_subset | bool | Use only the keys defined in this module when writing out metadata. [Raw data only.] | False |
| sra_folder | str | Location to store SRA files. | '' |
| bam_conversion | bool | Set to True to convert BAM files. [Raw data only.] | False |
| picard_path | str | Path to the Picard jar, for converting FASTQ to BAM. [Raw data only.] | '' |
| add_convert_modifier | bool | Add a looper SRA-convert modifier to the config file. | False |
| skip | int | Number of accessions to skip. | 0 |
| opts | object \| None | Optional opts object (passed to logmuse for logger setup). | None |
| max_prefetch_size | str \| int \| None | Argument passed to the prefetch command's --max-size option. | None |
| kwargs | object | Other values. | {} |
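The interplay of `filter`, `filter_size`, and the documented size formats can be sketched in a few lines. Both helpers below are hypothetical illustrations of the documented behavior, not geofetch's internal functions; geofetch lowercases the supplied pattern (see the constructor source), but the exact matching details here are an assumption.

```python
import re

# Hypothetical sketch of the `filter` / `filter_size` semantics.
UNITS = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3}

def parse_size(size_str):
    """Parse a size string in the documented formats (12B, 12KB, 12MB, 12GB) into bytes."""
    m = re.fullmatch(r"(\d+)(b|kb|mb|gb)", size_str.lower())
    if m is None:
        raise ValueError(f"Unsupported size format: {size_str}")
    return int(m.group(1)) * UNITS[m.group(2)]

def keep_file(name, size_bytes, filter_regex=None, filter_size=None):
    """Keep a processed file only if it matches the regex and fits the size limit."""
    if filter_regex is not None and not re.search(filter_regex, name.lower()):
        return False
    if filter_size is not None and size_bytes > parse_size(filter_size):
        return False
    return True

# Keep only .bed.gz files under 100 MB:
keep_file("GSM1_peaks.bed.gz", 5_000_000, r"\.bed\.gz$", "100MB")  # → True
```

A size filter expressed this way applies per file, which is why the table above notes that `filter_size` works only for sample-level data.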
Source code in geofetch/geofetch.py
def __init__(
    self,
    name: str = "",
    metadata_root: str = "",
    metadata_folder: str = "",
    just_metadata: bool = False,
    refresh_metadata: bool = False,
    config_template: str | None = None,
    pipeline_samples: str | None = None,
    pipeline_project: str | None = None,
    skip: int = 0,
    acc_anno: bool = False,
    use_key_subset: bool = False,
    processed: bool = False,
    data_source: str = "samples",
    filter: str | None = None,
    filter_size: str | None = None,
    geo_folder: str = ".",
    split_experiments: bool = False,
    bam_folder: str = "",
    fq_folder: str = "",
    sra_folder: str = "",
    bam_conversion: bool = False,
    picard_path: str = "",
    input: str | None = None,
    const_limit_project: int = 50,
    const_limit_discard: int = 1000,
    attr_limit_truncate: int = 500,
    max_soft_size: str = "1GB",
    discard_soft: bool = False,
    add_dotfile: bool = False,
    disable_progressbar: bool = False,
    add_convert_modifier: bool = False,
    opts: object | None = None,
    max_prefetch_size: str | int | None = None,
    **kwargs: object,
) -> None:
    """
    Constructor.

    Args:
        input: GSE number or path to the input file.
        name: Specify a project name. Defaults to GSE number or name of accessions file name.
        metadata_root: Specify a parent folder location to store metadata.
            The project name will be added as a subfolder (Default: $SRAMETA:).
        metadata_folder: Specify an absolute folder location to store metadata. No subfolder will be added.
            Overrides value of --metadata-root (Default: Not used (--metadata-root is used by default)).
        just_metadata: If set, don't actually run downloads, just create metadata.
        refresh_metadata: If set, re-download metadata even if it exists.
        config_template: Project config yaml file template.
        pipeline_samples: Specify one or more filepaths to SAMPLES pipeline interface yaml files.
            These will be added to the project config file to make it immediately compatible with looper.
            (Default: null).
        pipeline_project: Specify one or more filepaths to PROJECT pipeline interface yaml files.
            These will be added to the project config file to make it immediately compatible with looper.
            (Default: null).
        acc_anno: Produce annotation sheets for each accession.
            Project combined PEP for the whole project won't be produced.
        discard_soft: Create project without downloading soft files on the disc.
        add_dotfile: Add .pep.yaml file that points .yaml PEP file.
        disable_progressbar: Set true to disable progressbar.
        const_limit_project: Optional: Limit of the number of the constant sample characters
            that should not be in project yaml. (Default: 50).
        const_limit_discard: Optional: Limit of the number of the constant sample characters
            that should not be discarded (Default: 250).
        attr_limit_truncate: Optional: Limit of the number of sample characters.
            Any attribute with more than X characters will truncate to the first X, where X is a number of characters
            (Default: 500).
        max_soft_size: Optional: Max size of soft file.
            Supported input formats: 12B, 12KB, 12MB, 12GB. [Default value: 1GB].
        processed: Download processed data (Default: download raw data).
        data_source: Specifies the source of data on the GEO record to retrieve processed data,
            which may be attached to the collective series entity, or to individual samples. Allowable values are:
            samples, series or both (all). Ignored unless 'processed' flag is set. (Default: samples).
        filter: Filter regex for processed filenames (Default: None). Ignored unless 'processed' flag is set.
        filter_size: Filter size for processed files that are stored as sample repository (Default: None).
            Works only for sample data. Supported input formats: 12B, 12KB, 12MB, 12GB.
            Ignored unless 'processed' flag is set.
        geo_folder: Specify a location to store processed GEO files.
            Ignored unless 'processed' flag is set. (Default: $GEODATA:).
        split_experiments: Split SRR runs into individual samples. By default, SRX experiments with multiple SRR
            Runs will have a single entry in the annotation table, with each run as a separate row in the
            subannotation table. This setting instead treats each run as a separate sample [Works with raw data].
        bam_folder: Optional: Specify folder of bam files. Geofetch will not download sra files when
            corresponding bam files already exist. (Default: $SRABAM:) [Works with raw data].
        fq_folder: Optional: Specify folder of fastq files. Geofetch will not download sra files when corresponding
            fastq files already exist. (Default: $SRAFQ:) [Works with raw data].
        use_key_subset: Use just the keys defined in this module when writing out metadata. [Works with raw data].
        sra_folder: Optional: Specify a location to store sra files.
        bam_conversion: Optional: set True to convert bam files [Works with raw data].
        picard_path: Specify a path to the picard jar, if you want to convert fastq to bam [Works with raw data].
        add_convert_modifier: Add looper SRA convert modifier to config file.
        skip: Skip some accessions. (Default: no skip).
        opts: opts object [Optional].
        max_prefetch_size: Argument to prefetch command's --max-size option.
        kwargs: Other values.
    """

    global _LOGGER
    _LOGGER = (
        logmuse.logger_via_cli(opts)
        if opts is not None
        else logging.getLogger(__name__)
    )

    if name:
        self.project_name = name
    else:
        try:
            self.project_name = os.path.splitext(os.path.basename(input))[0]
        except TypeError:
            self.project_name = "project_name"

    if metadata_folder:
        self.metadata_expanded = expandpath(metadata_folder)
        if os.path.isabs(self.metadata_expanded):
            self.metadata_root_full = metadata_folder
        else:
            self.metadata_expanded = os.path.abspath(self.metadata_expanded)
            self.metadata_root_full = os.path.abspath(metadata_root)
        self.metadata_root_full = metadata_folder
    else:
        self.metadata_expanded = expandpath(metadata_root)
        if os.path.isabs(self.metadata_expanded):
            self.metadata_root_full = metadata_root
        else:
            self.metadata_expanded = os.path.abspath(self.metadata_expanded)
            self.metadata_root_full = os.path.abspath(metadata_root)

    self.just_metadata = just_metadata
    self.refresh_metadata = refresh_metadata
    self.config_template = config_template

    # if user specified a pipeline interface path for samples, add it into the project config
    if pipeline_samples and pipeline_samples != "null":
        self.file_pipeline_samples = pipeline_samples
        self.file_pipeline_samples = (
            f"pipeline_interfaces: {self.file_pipeline_samples}"
        )
    else:
        self.file_pipeline_samples = ""

    # if user specified a pipeline interface path, add it into the project config
    if pipeline_project:
        self.file_pipeline_project = (
            f"looper:\n    pipeline_interfaces: {pipeline_project}"
        )
    else:
        self.file_pipeline_project = ""

    self.skip = skip
    self.acc_anno = acc_anno
    self.use_key_subset = use_key_subset
    self.processed = processed
    self.supp_by = data_source

    if filter:
        self.filter_re = re.compile(filter.lower())
    else:
        self.filter_re = None

    # Postpend the project name as a subfolder (only for -m option)
    self.metadata_expanded = os.path.join(
        self.metadata_expanded, self.project_name
    )
    self.metadata_root_full = os.path.join(
        self.metadata_root_full, self.project_name
    )

    if filter_size is not None:
        try:
            self.filter_size = convert_size(filter_size.lower())
        except ValueError as message:
            _LOGGER.error(message)
            raise SystemExit()
    else:
        self.filter_size = filter_size

    self.geo_folder = geo_folder
    self.split_experiments = split_experiments
    self.bam_folder = bam_folder
    self.fq_folder = fq_folder
    self.sra_folder = sra_folder
    self.bam_conversion = bam_conversion
    self.picard_path = picard_path

    self.const_limit_project = const_limit_project
    self.const_limit_discard = const_limit_discard
    self.attr_limit_truncate = attr_limit_truncate
    self.max_soft_size = convert_size(max_soft_size.lower())

    self.discard_soft = discard_soft
    self.add_dotfile = add_dotfile
    self.disable_progressbar = disable_progressbar
    self.add_convert_modifier = add_convert_modifier
    _LOGGER.info(f"Metadata folder: {self.metadata_expanded}")

    # Some sanity checks before proceeding
    if bam_conversion and not just_metadata and not _which("samtools"):
        raise SystemExit("For SAM/BAM processing, samtools should be on PATH.")

    self.just_object = False
    self.max_prefetch_size = (
        "50g" if max_prefetch_size is None else max_prefetch_size
    )
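The project-name fallback at the top of the constructor (explicit `name`, then the input file's basename without extension, then a placeholder) can be isolated as a small sketch; `derive_project_name` is a hypothetical name used only for illustration:

```python
import os

def derive_project_name(name, input):
    # Mirrors the constructor's fallback chain: explicit name first,
    # then the input file's basename without extension, then a placeholder.
    if name:
        return name
    try:
        return os.path.splitext(os.path.basename(input))[0]
    except TypeError:  # input is None
        return "project_name"

derive_project_name("", "/data/accessions.csv")  # → 'accessions'
derive_project_name("", None)                    # → 'project_name'
```

Note that `fetch_all` applies the same chain but falls back to the `input` string itself rather than a placeholder.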

fetch_all

fetch_all(input, name=None)

Main function driver/workflow.

Searches, filters, downloads and saves data and metadata from GEO and SRA.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| input | str | GSE or input file with GSE accessions. | required |
| name | str \| None | Name of the project. | None |

Returns:

| Type | Description |
| --- | --- |
| None \| Project | None or a peppy Project. |

Source code in geofetch/geofetch.py
def fetch_all(self, input: str, name: str | None = None) -> None | peppy.Project:
    """
    Main function driver/workflow.

    Searches, filters, downloads and saves data and metadata from GEO and SRA.

    Args:
        input: GSE or input file with GSE accessions.
        name: Name of the project.

    Returns:
        None or peppy Project.
    """

    if name is not None:
        self.project_name = name
    else:
        try:
            self.project_name = os.path.splitext(os.path.basename(input))[0]
        except TypeError:
            self.project_name = input

    # check to make sure prefetch is callable
    if not self.just_metadata and not self.processed:
        if not is_prefetch_callable():
            raise SystemExit(
                "To download raw data, you must first install the sratoolkit, with prefetch in your PATH. "
                "Installation instruction: http://geofetch.databio.org/en/latest/install/"
            )

    acc_GSE_list = parse_accessions(
        input, self.metadata_expanded, self.just_metadata
    )
    if len(acc_GSE_list) == 1:
        self.disable_progressbar = True
    metadata_dict_combined = {}
    subannotation_dict_combined = {}

    processed_metadata_samples = []
    processed_metadata_series = []

    acc_GSE_keys = acc_GSE_list.keys()
    nkeys = len(acc_GSE_keys)
    ncount = 0
    for acc_GSE in track(
        acc_GSE_list.keys(),
        description="Processing... ",
        disable=self.disable_progressbar,
    ):
        try:
            ncount += 1
            if ncount <= self.skip:
                continue
            elif ncount == self.skip + 1:
                _LOGGER.info(f"Skipped {self.skip} accessions. Starting now.")

            if not self.just_object or not self.acc_anno:
                _LOGGER.info(
                    f"\033[38;5;200mProcessing accession {ncount} of {nkeys}: '{acc_GSE}'\033[0m"
                )

            if len(re.findall(GSE_PATTERN, acc_GSE)) != 1:
                _LOGGER.debug(len(re.findall(GSE_PATTERN, acc_GSE)))
                _LOGGER.warning(
                    "This does not appear to be a correctly formatted GSE accession! "
                    "Continue anyway..."
                )

            if len(acc_GSE_list[acc_GSE]) > 0:
                _LOGGER.info(
                    f"Limit to: {list(acc_GSE_list[acc_GSE])}"
                )  # a list of GSM#s

            # For each GSE acc, produce a series of metadata files
            file_gse = os.path.join(self.metadata_expanded, acc_GSE + "_GSE.soft")
            file_gsm = os.path.join(self.metadata_expanded, acc_GSE + "_GSM.soft")
            file_sra = os.path.join(self.metadata_expanded, acc_GSE + "_SRA.csv")

            if not os.path.isfile(file_gse) or self.refresh_metadata:
                file_gse_content = Accession(acc_GSE).fetch_metadata(
                    file_gse,
                    clean=self.discard_soft,
                    max_soft_size=self.max_soft_size,
                )
            else:
                _LOGGER.info(f"Found previous GSE file: {file_gse}")
                with open(file_gse, "r") as gse_file_obj:
                    file_gse_content = gse_file_obj.read().split("\n")
                file_gse_content = [
                    elem for elem in file_gse_content if len(elem) > 0
                ]

            file_gse_content_dict = gse_content_to_dict(file_gse_content)

            if not os.path.isfile(file_gsm) or self.refresh_metadata:
                file_gsm_content = Accession(acc_GSE).fetch_metadata(
                    file_gsm,
                    typename="GSM",
                    clean=self.discard_soft,
                    max_soft_size=self.max_soft_size,
                )
            else:
                _LOGGER.info(f"Found previous GSM file: {file_gsm}")
                with open(file_gsm, "r") as gsm_file_obj:
                    file_gsm_content = gsm_file_obj.read().split("\n")
                file_gsm_content = [
                    elem for elem in file_gsm_content if len(elem) > 0
                ]

            gsm_enter_dict = acc_GSE_list[acc_GSE]

            # download processed data
            if self.processed:
                (
                    meta_processed_samples,
                    meta_processed_series,
                ) = self.fetch_processed_one(
                    gse_file_content=file_gse_content,
                    gsm_file_content=file_gsm_content,
                    gsm_filter_list=gsm_enter_dict,
                )

                # download processed files:
                if not self.just_metadata:
                    self._download_processed_data(
                        acc_gse=acc_GSE,
                        meta_processed_samples=meta_processed_samples,
                        meta_processed_series=meta_processed_series,
                    )

                # generating PEPs for processed files:
                if self.acc_anno:
                    self._generate_processed_meta(
                        acc_GSE,
                        meta_processed_samples,
                        meta_processed_series,
                        gse_meta_dict=file_gse_content_dict,
                    )

                else:
                    # adding metadata from current experiment to the project
                    processed_metadata_samples.extend(meta_processed_samples)
                    processed_metadata_series.extend(meta_processed_series)

            else:
                # read gsm metadata
                gsm_metadata = self._read_gsm_metadata(
                    acc_GSE, acc_GSE_list, file_gsm_content
                )

                # download sra metadata
                srp_list_result = self._get_SRA_meta(
                    file_gse_content, gsm_metadata, file_sra
                )
                if not srp_list_result:
                    _LOGGER.info("No SRP data, continuing ....")
                    _LOGGER.warning("No raw pep will be created! ....")
                    # delete current acc if no raw data was found
                    # del metadata_dict[acc_GSE]
                    pass
                else:
                    _LOGGER.info("Parsing SRA file to download SRR records")
                gsm_multi_table, gsm_metadata, runs = self._process_sra_meta(
                    srp_list_result, gsm_enter_dict, gsm_metadata
                )

                # download raw data:
                if not self.just_metadata:
                    for run in runs:
                        # download raw data
                        _LOGGER.info(f"Getting SRR: {run}  in ({acc_GSE})")
                        self._download_raw_data(run)
                else:
                    _LOGGER.info("Dry run, no data will be downloaded")

                # save one project
                if self.acc_anno and nkeys > 1:
                    self._write_raw_annotation_new(
                        name=acc_GSE,
                        metadata_dict=gsm_metadata,
                        subannot_dict=gsm_multi_table,
                        gse_meta_dict=file_gse_content_dict,
                    )

                else:
                    metadata_dict_combined.update(gsm_metadata)
                    subannotation_dict_combined.update(gsm_multi_table)
        except Exception as e:
            _LOGGER.warning(f"Couldn't process {acc_GSE}: {e}", exc_info=True)
            continue

    _LOGGER.info(f"Finished processing {len(acc_GSE_list)} accession(s)")

    # Logging cleaning process:
    if self.discard_soft:
        _LOGGER.info("Cleaning soft files ...")
        clean_soft_files(self.metadata_root_full)

    #######################################################################################

    # saving PEPs for processed data
    if self.processed:
        if not self.acc_anno:
            return_value = self._generate_processed_meta(
                name=self.project_name,
                meta_processed_samples=processed_metadata_samples,
                meta_processed_series=processed_metadata_series,
                gse_meta_dict=(
                    file_gse_content_dict if len(acc_GSE_list.keys()) == 1 else None
                ),
            )
            if self.just_object:
                return return_value

    # saving PEPs for raw data
    else:
        return_value = self._write_raw_annotation_new(
            f"{self.project_name}_PEP",
            metadata_dict_combined,
            subannotation_dict_combined,
            gse_meta_dict=(
                file_gse_content_dict if len(acc_GSE_list.keys()) == 1 else None
            ),
        )
        if self.just_object:
            return return_value
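`fetch_all` accepts either a single GSE accession or a file listing accessions, resolved by `parse_accessions`. Below is a minimal sketch of that dual-input convention, with a hypothetical helper name and an assumed `GSE` + digits pattern:

```python
import os
import re

def read_accessions(input_str):
    # If the argument looks like a GSE accession, use it directly;
    # otherwise treat it as a path to a file with one accession per line.
    if re.fullmatch(r"GSE\d+", input_str):
        return [input_str]
    accessions = []
    with open(input_str) as fh:
        for line in fh:
            line = line.strip()
            if line:
                accessions.append(line)
    return accessions
```

geofetch's real `parse_accessions` also tracks optional per-GSE GSM subsets (the "Limit to" lists logged above), which this sketch omits.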

fetch_processed_one

fetch_processed_one(gse_file_content, gsm_file_content, gsm_filter_list)

Fetch one processed GSE project and return its metadata.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| gse_file_content | list[str] | GSE soft file content. | required |
| gsm_file_content | list[str] | GSM soft file content. | required |
| gsm_filter_list | dict | GSM accessions that have to be downloaded. | required |

Returns:

| Type | Description |
| --- | --- |
| tuple[list, list] | Tuple of (meta_processed_samples, meta_processed_series). |

Source code in geofetch/geofetch.py
def fetch_processed_one(
    self,
    gse_file_content: list[str],
    gsm_file_content: list[str],
    gsm_filter_list: dict,
) -> tuple[list, list]:
    """
    Fetch one processed GSE project and return its metadata.

    Args:
        gse_file_content: GSE soft file content.
        gsm_file_content: GSM soft file content.
        gsm_filter_list: List of GSM that have to be downloaded.

    Returns:
        Tuple of (meta_processed_samples, meta_processed_series).
    """
    (
        meta_processed_samples,
        meta_processed_series,
    ) = self._get_list_of_processed_files(gse_file_content, gsm_file_content)

    # taking into account list of GSM that is specified in the input file
    meta_processed_samples = _filter_gsm(meta_processed_samples, gsm_filter_list)

    # samples
    meta_processed_samples = self._expand_metadata_list(meta_processed_samples)

    # series
    meta_processed_series = self._expand_metadata_list(meta_processed_series)

    # convert column names to lowercase and underscore
    meta_processed_samples = _standardize_colnames(meta_processed_samples)
    meta_processed_series = _standardize_colnames(meta_processed_series)

    return meta_processed_samples, meta_processed_series
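The last step above standardizes column names to lowercase-with-underscores. A hypothetical sketch of that kind of normalization follows (geofetch's `_standardize_colnames` may differ in detail):

```python
import re

def standardize_colname(name):
    # Lowercase, then collapse runs of spaces/punctuation into single underscores.
    name = name.strip().lower()
    name = re.sub(r"[^a-z0-9]+", "_", name)
    return name.strip("_")

standardize_colname("Sample Title")  # → 'sample_title'
```

Normalized names like this are what end up as sample attribute columns in the generated PEP annotation tables.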

get_projects

get_projects(input, just_metadata=True, discard_soft=True)

Fetch projects from GEO|SRA and return peppy projects.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| input | str | GSE number, or path to file of GSE numbers. | required |
| just_metadata | bool | Process only metadata. | True |
| discard_soft | bool | Clean run, without downloading soft files. | True |

Returns:

| Type | Description |
| --- | --- |
| dict | Peppy project or list of projects, if acc_anno is set. |

Source code in geofetch/geofetch.py
def get_projects(
    self, input: str, just_metadata: bool = True, discard_soft: bool = True
) -> dict:
    """
    Fetch projects from GEO|SRA and return peppy projects.

    Args:
        input: GSE number, or path to file of GSE numbers.
        just_metadata: Process only metadata.
        discard_soft: Clean run, without downloading soft files.

    Returns:
        Peppy project or list of projects, if acc_anno is set.
    """
    self.just_metadata = just_metadata
    self.just_object = True
    self.discard_soft = discard_soft
    acc_GSE_list = parse_accessions(
        input, self.metadata_expanded, self.just_metadata
    )

    project_dict = {}

    # processed data:
    if self.processed:
        if self.acc_anno:
            self.disable_progressbar = True
            nkeys = len(acc_GSE_list.keys())
            ncount = 0
            self.acc_anno = False
            for acc_GSE in acc_GSE_list.keys():
                ncount += 1
                _LOGGER.info(
                    f"\033[38;5;200mProcessing accession {ncount} of {nkeys}: '{acc_GSE}'\033[0m"
                )
                project_dict.update(self.fetch_all(input=acc_GSE, name=acc_GSE))
        else:
            try:
                project_n = os.path.splitext(os.path.basename(input))[0]
            except TypeError:
                project_n = input
            project_dict.update(self.fetch_all(input=input, name=project_n))

    # raw data:
    else:
        # Not sure about below code...
        if self.acc_anno:
            self.disable_progressbar = True
            self.acc_anno = False
            nkeys = len(acc_GSE_list.keys())
            ncount = 0
            for acc_GSE in acc_GSE_list.keys():
                ncount += 1
                _LOGGER.info(
                    f"\033[38;5;200mProcessing accession {ncount} of {nkeys}: '{acc_GSE}'\033[0m"
                )
                project = self.fetch_all(input=acc_GSE)
                project_dict[acc_GSE + "_raw"] = project

        else:
            try:
                project_n = os.path.splitext(os.path.basename(input))[0]
            except TypeError:
                project_n = input
            ser_dict = self.fetch_all(input=input)
            project_dict[project_n + "_raw"] = ser_dict

    new_pr_dict = {}
    for pr_key in project_dict.keys():
        if project_dict[pr_key]:
            new_pr_dict[pr_key] = project_dict[pr_key]

    return new_pr_dict
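The closing loop that drops empty projects from `project_dict` is equivalent to a one-line dict comprehension:

```python
# Equivalent to the final loop in get_projects: keep only truthy projects.
project_dict = {"GSE1_raw": {"samples": 3}, "GSE2_raw": None}
new_pr_dict = {k: v for k, v in project_dict.items() if v}
# new_pr_dict == {"GSE1_raw": {"samples": 3}}
```

This matters because `fetch_all` returns None for accessions that yield no PEP, so the returned dict contains only projects that were actually built.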

Finder Class

Class for searching and finding GSE accessions:

Finder

Finder(filters=None, retmax=RETMAX)

Class for finding GSE accessions within a specified period of time.

Initialize Finder with optional filters and result limit.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| filters | str \| None | Filters to add to the query. Filter patterns can be found here: https://www.ncbi.nlm.nih.gov/books/NBK3837/#EntrezHelp.Using_the_Advanced_Search_Pag | None |
| retmax | int | Maximum number of retrieved accessions. | RETMAX |
Source code in geofetch/finder.py
def __init__(self, filters: str | None = None, retmax: int = RETMAX) -> None:
    """
    Initialize Finder with optional filters and result limit.

    Args:
        filters: Filters that have to be added to the query.
            Filter Patterns can be found here:
            https://www.ncbi.nlm.nih.gov/books/NBK3837/#EntrezHelp.Using_the_Advanced_Search_Pag
        retmax: Maximum number of retrieved accessions.
    """
    self.query_customized_ending = ETOOLS_ENDING.format(retmax=retmax)
    self.query_filter_str = self._create_filter_str(filters)
    self.last_result = []

find_differences staticmethod

find_differences(old_list, new_list)

Compare two lists and find the elements that are in the new list but not in the old list.

Parameters:

- `old_list` (`list[str]`): Old list of elements. Required.
- `new_list` (`list[str]`): New list of elements. Required.

Returns:

- `list[str]`: Elements that are in `new_list` but not in `old_list`.

Source code in geofetch/finder.py
@staticmethod
def find_differences(old_list: list[str], new_list: list[str]) -> list[str]:
    """
    Compare two lists and find the elements that are in the new list but not in the old list.

    Args:
        old_list: Old list of elements.
        new_list: New list of elements.

    Returns:
        List of elements that are not in old list but are in new_list.
    """
    return list(set(new_list) - set(old_list))
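
The method is a thin wrapper around set difference, so the result order is not guaranteed. A minimal standalone sketch of the same logic (the accession values below are made up for illustration):

```python
# Standalone sketch of Finder.find_differences: set difference of two
# accession lists. The GSE numbers are illustrative only.
old_list = ["GSE100", "GSE101", "GSE102"]
new_list = ["GSE101", "GSE102", "GSE103", "GSE104"]

# Elements present in new_list but absent from old_list; order is
# arbitrary because sets are unordered, so sort for a stable display.
diff = list(set(new_list) - set(old_list))
print(sorted(diff))  # ['GSE103', 'GSE104']
```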

generate_file

generate_file(file_path, gse_list=None)

Save the list of GSE accessions stored in this Finder object to a given file.

Parameters:

- `file_path` (`str`): Path to the file where the GSE accessions will be saved. Required.
- `gse_list` (`list[str] | None`): List of GSE accessions; defaults to the last query result. Default: `None`.
Source code in geofetch/finder.py
def generate_file(self, file_path: str, gse_list: list[str] | None = None) -> None:
    """
    Save the list of GSE accessions stored in this Finder object to a given file.

    Args:
        file_path: Path to the file where GSE accessions will be saved.
        gse_list: List of gse accessions.
    """
    if gse_list is None:
        gse_list = self.last_result
    file_dir = os.path.split(file_path)[0]
    if file_dir != "" and not os.path.exists(file_dir):
        _LOGGER.error(f"Path: '{file_dir}' does not exist! No file will be saved")
        return

    with open(file_path, "w") as fp:
        for item in gse_list:
            fp.write(f"{item}\n")
    _LOGGER.info("File has been saved!")
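
The core behavior can be sketched without a `Finder` instance: the method simply writes one accession per line. A self-contained example using a temporary directory (accession values are illustrative):

```python
import os
import tempfile

# Sketch of generate_file's core behavior: write one GSE accession per line.
gse_list = ["GSE100", "GSE200", "GSE300"]

with tempfile.TemporaryDirectory() as tmp:
    file_path = os.path.join(tmp, "accessions.txt")
    with open(file_path, "w") as fp:
        for item in gse_list:
            fp.write(f"{item}\n")

    # Read it back to confirm one accession per line.
    with open(file_path) as fp:
        lines = fp.read().splitlines()

print(lines)  # ['GSE100', 'GSE200', 'GSE300']
```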

get_gse_all

get_gse_all()

Get list of all gse accessions available in GEO.

Returns:

- `list[str]`: List of GSE accessions.

Source code in geofetch/finder.py
def get_gse_all(self) -> list[str]:
    """
    Get list of all gse accessions available in GEO.

    Returns:
        List of gse accessions.
    """
    return self.get_gse_id_by_query(url=self._compose_url())

get_gse_by_date

get_gse_by_date(start_date, end_date=None)

Search GSE accessions between a start date and an end date. By default, the end date is today.

Parameters:

- `start_date` (`str`): Earliest update date [input format: 'YYYY/MM/DD']. Required.
- `end_date` (`str | None`): Latest update date [input format: 'YYYY/MM/DD']; defaults to today. Default: `None`.

Returns:

- `list[str]`: List of GSE accessions.

Source code in geofetch/finder.py
def get_gse_by_date(
    self, start_date: str, end_date: str | None = None
) -> list[str]:
    """
    Search GSE accessions between a start date and an end date. By default, the end date is today.

    Args:
        start_date: Earliest update date [input format: 'YYYY/MM/DD'].
        end_date: Latest update date [input format: 'YYYY/MM/DD']; defaults to today.

    Returns:
        List of gse accessions.
    """
    if end_date is None:
        end_date = TODAY_DATE
    new_date_filter = DATE_FILTER.format(start_date=start_date, end_date=end_date)
    return self.get_gse_id_by_query(url=self._compose_url(new_date_filter))

get_gse_by_day_count

get_gse_by_day_count(n_days=1)

Get the list of GSE accessions that were uploaded or updated in the last `n_days` days.

Parameters:

- `n_days` (`int`): Number of days back from now (e.g. 5). Default: `1`.

Returns:

- `list[str]`: List of GSE accessions.

Source code in geofetch/finder.py
def get_gse_by_day_count(self, n_days: int = 1) -> list[str]:
    """
    Get the list of GSE accessions that were uploaded or updated in the last n_days days.

    Args:
        n_days: Number of days from now [e.g. 5].

    Returns:
        List of gse accessions.
    """
    today = datetime.today()
    start_date = today - timedelta(days=n_days)
    start_date_str = start_date.strftime("%Y/%m/%d")
    return self.get_gse_by_date(start_date_str)
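
The date arithmetic is plain `datetime` subtraction. The same computation with a fixed reference date (so the result is deterministic) rather than today's date:

```python
from datetime import datetime, timedelta

# Sketch of the date arithmetic in get_gse_by_day_count: the start date
# is n_days before "today". A fixed reference date stands in for today.
reference = datetime(2024, 3, 10)
n_days = 5

start_date = reference - timedelta(days=n_days)
print(start_date.strftime("%Y/%m/%d"))  # 2024/03/05
```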

get_gse_id_by_query

get_gse_id_by_query(url)

Run esearch (the NCBI search tool) for the specified URL and retrieve the resulting list of GSE accessions.

Parameters:

- `url` (`str`): URL of the query. Required.

Returns:

- `list[str]`: List of GSE ids.

Source code in geofetch/finder.py
def get_gse_id_by_query(self, url: str) -> list[str]:
    """
    Run esearch (the NCBI search tool) for the specified URL and retrieve the resulting list of GSE accessions.

    Args:
        url: URL of the query.

    Returns:
        List of gse ids.
    """
    uids_list = self._run_search_query(url)
    gse_id_list = [self.uid_to_gse(d) for d in uids_list]
    self.last_result = gse_id_list
    return gse_id_list
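
`_run_search_query` is private and its exact return shape is not shown here, but NCBI E-utilities' esearch endpoint returns JSON whose UID list sits under `esearchresult.idlist`. Extracting the UID list from such a response can be sketched as follows (the response body and UID values below are illustrative):

```python
import json

# Sample esearch-style JSON body; real responses come from the NCBI
# E-utilities endpoint. The UID values are illustrative.
response_text = '{"esearchresult": {"count": "2", "idlist": ["200012345", "200067890"]}}'

# Pull out the UID list; each UID is then converted with uid_to_gse and
# the result cached in last_result.
uids = json.loads(response_text)["esearchresult"]["idlist"]
print(uids)  # ['200012345', '200067890']
```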

get_gse_last_3_month

get_gse_last_3_month()

Get the list of GSE accessions that were uploaded or updated in the last 3 months.

Returns:

- `list[str]`: List of GSE accessions.

Source code in geofetch/finder.py
def get_gse_last_3_month(self) -> list[str]:
    """
    Get the list of GSE accessions that were uploaded or updated in the last 3 months.

    Returns:
        List of gse accessions.
    """
    return self.get_gse_id_by_query(url=self._compose_url(THREE_MONTH_FILTER))

get_gse_last_week

get_gse_last_week()

Get the list of GSE accessions that were uploaded or updated in the last week.

Returns:

- `list[str]`: List of GSE accessions.

Source code in geofetch/finder.py
def get_gse_last_week(self) -> list[str]:
    """
    Get the list of GSE accessions that were uploaded or updated in the last week.

    Returns:
        List of gse accessions.
    """
    return self.get_gse_by_day_count(7)

uid_to_gse staticmethod

uid_to_gse(uid)

UID to GSE accession converter.

Parameters:

- `uid` (`str`): UID string (unique identifier number in GEO). Required.

Returns:

- `str`: GSE id string.

Source code in geofetch/finder.py
@staticmethod
def uid_to_gse(uid: str) -> str:
    """
    UID to GSE accession converter.

    Args:
        uid: UID string (Unique Identifier Number in GEO).

    Returns:
        GSE id string.
    """
    # A GEO DataSets series UID is the GSE number prefixed by a nonzero
    # digit and zero padding (e.g. '200012345' -> 'GSE12345'); strip the prefix.
    uid_regex = re.compile(r"[1-9]+0+([1-9]+[0-9]*)")
    return "GSE" + uid_regex.match(uid).group(1)
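
Applying the same pattern outside the class shows the conversion at work. The regex consumes the leading nonzero digit(s) and the zero padding, then captures the embedded GSE number:

```python
import re

# Same pattern as Finder.uid_to_gse.
uid_regex = re.compile(r"[1-9]+0+([1-9]+[0-9]*)")

# The GEO DataSets UID '200012345' embeds series number 12345.
print("GSE" + uid_regex.match("200012345").group(1))  # GSE12345
```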