spectre#

SpECTRE version: 2024.05.11

spectre [OPTIONS] COMMAND [ARGS]...

Options

--version#

Show the version and exit.

--machine#

Show the machine we’re running on and exit.

--debug#

Enable debug logging.

--silent#

Disable all logging.

-b, --build-dir <build_dir>#

Prepend a build directory to the PATH so subprocesses can find executables in it. Without this option, executables are found in the current PATH, falling back to the build directory in which this Python script is installed.

--profile#

Enable profiling. Expect slower execution due to profiling overhead. A summary of the results is printed to the terminal. Use the '--output-profile' option to write the results to a file.

--output-profile <output_profile>#

Write profiling results to a file. The file can be opened by profiling visualization tools such as ‘pstats’ or ‘gprof2dot’. See the Python ‘cProfile’ docs for details.

-c, --config-file <config_file>#

Configuration file in YAML format. Can provide defaults for command-line options and additional configuration. To specify options for subcommands, list them in a section with the same name as the subcommand. All options that are listed in the help string for a subcommand are supported. Unless otherwise specified in the help string, use the name of the option with dashes replaced by underscores. Example:

status:
  starttime: now-2days
  state_styles:
    RUNNING: blink
plot:
  dat:
    stylesheet: path/to/stylesheet.mplstyle

The path of the config file can also be specified by setting the ‘SPECTRE_CONFIG_FILE’ environment variable.

Default:

~/.config/spectre.yaml

Environment variables

SPECTRE_CONFIG_FILE

Provide a default for -c

bbh#

Pipeline for binary black hole simulations.

spectre bbh [OPTIONS] COMMAND [ARGS]...

find-horizon#

Find an apparent horizon in volume data.

spectre bbh find-horizon [OPTIONS] [H5_FILES]...

Options

-d, --subfile-name <subfile_name>#

Name of subfile within h5 file containing volume data to plot.

-l, --list-vars#

Print available variables and exit.

-y, --var <vars_patterns>#

Variable to plot. List any tensor components in the volume data file, such as ‘Shift_x’. Also accepts glob patterns like ‘Shift_*’. Can be specified multiple times.

--list-observations, --list-times#

Print all available observation times and exit.

--step <step>#

Observation step number. Specify '-1' or 'last' for the last step in the file. Mutually exclusive with '--time'.

--time <time>#

Observation time. The observation step closest to the specified time is selected. Mutually exclusive with '--step'.

-l, --l-max <l_max>#

Required Maximum l-mode for the horizon search.

-r, --initial-radius <initial_radius>#

Required Initial coordinate radius of the horizon.

-C, --center <center>#

Required Coordinate center of the horizon.

-o, --output <output>#

H5 output file where the horizon Ylm coefficients will be written. Can be a new or existing file.

--output-coeffs-subfile <output_coeffs_subfile>#

Name of the subfile in the H5 output file where the horizon Ylm coefficients will be written. These can be used to reconstruct the horizon, e.g. to initialize excisions in domains.

--output-coords-subfile <output_coords_subfile>#

Name of the subfile in the H5 output file where the horizon coordinates will be written. These can be used for visualization.

Arguments

H5_FILES#

Optional argument(s)

generate-id#

Generate initial data for a BBH simulation.

Parameters for the initial data will be inserted into the ‘id_input_file_template’. The remaining options are forwarded to the ‘schedule’ command. See ‘schedule’ docs for details.

The orbital parameters can be computed with the function ‘initial_orbital_parameters’ in ‘support.Pipelines.EccentricityControl.InitialOrbitalParameters’.

Intrinsic parameters:

mass_ratio: Defined as q = M_A / M_B >= 1.

dimensionless_spin_a: Dimensionless spin of the larger black hole, chi_A.

dimensionless_spin_b: Dimensionless spin of the smaller black hole, chi_B.

Orbital parameters:

separation: Coordinate separation D of the black holes.

orbital_angular_velocity: Omega_0.

radial_expansion_velocity: adot_0.

Scheduling options:

id_input_file_template: Input file template where parameters are inserted.

evolve: Set to True to evolve the initial data after generation.

pipeline_dir: Directory where steps in the pipeline are created. Required when 'evolve' is set to True. The initial data will be created in a subdirectory '001_InitialData'.

run_dir: Directory where the initial data is generated. Mutually exclusive with 'pipeline_dir'.

spectre bbh generate-id [OPTIONS]

Options

-q, --mass-ratio <mass_ratio>#

Required Mass ratio of the binary, defined as q = M_A / M_B >= 1.

--dimensionless-spin-A, --chi-A <dimensionless_spin_a>#

Required Dimensionless spin of the larger black hole, chi_A.

--dimensionless-spin-B, --chi-B <dimensionless_spin_b>#

Required Dimensionless spin of the smaller black hole, chi_B.

-D, --separation <separation>#

Coordinate separation D of the black holes.

-w, --orbital-angular-velocity <orbital_angular_velocity>#

Orbital angular velocity Omega_0.

-a, --radial-expansion-velocity <radial_expansion_velocity>#

Radial expansion velocity adot_0, which is the radial velocity divided by the radius.

-e, --eccentricity <eccentricity>#

Eccentricity of the orbit. Specify together with _one_ of the other orbital parameters. Currently only an eccentricity of 0 is supported (circular orbit).

-l, --mean-anomaly-fraction <mean_anomaly_fraction>#

Mean anomaly of the orbit divided by 2 pi, so it is a number between 0 and 1. The value 0 corresponds to the pericenter of the orbit (closest approach), and the value 0.5 corresponds to the apocenter of the orbit (farthest distance).
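The convention above can be sketched in a few lines of Python. This is only an illustration of the documented mapping, not part of the SpECTRE API; the helper name 'mean_anomaly_fraction' is hypothetical:

```python
import math

def mean_anomaly_fraction(mean_anomaly: float) -> float:
    """Convert a mean anomaly in radians to the fraction of 2*pi
    used by '--mean-anomaly-fraction' (hypothetical helper)."""
    return (mean_anomaly % (2 * math.pi)) / (2 * math.pi)

# Pericenter (closest approach) corresponds to fraction 0,
# apocenter (farthest distance) to fraction 0.5.
```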

--num-orbits <num_orbits>#

Number of orbits until merger. Specify together with a zero eccentricity to compute initial orbital parameters for a circular orbit.

--time-to-merger <time_to_merger>#

Time to merger. Specify together with a zero eccentricity to compute initial orbital parameters for a circular orbit.

-L, --refinement-level <refinement_level>#

h-refinement level.

Default:

0

-P, --polynomial-order <polynomial_order>#

p-refinement level.

Default:

5

--id-input-file-template <id_input_file_template>#

Input file template for the initial data.

Default:

/__w/spectre/spectre/build/bin/python/spectre/Pipelines/Bbh/InitialData.yaml

--evolve#

Evolve the initial data after generation.

-d, --pipeline-dir <pipeline_dir>#

Directory where steps in the pipeline are created.

-E, --executable <executable>#

The executable to run. Can be a path, or just the name of the executable if it’s in the ‘PATH’. If unspecified, the ‘Executable’ listed in the input file metadata is used.

Default:

executable listed in input file

-o, --run-dir <run_dir>#

The directory to which input file, submit script, etc. are copied, relative to which the executable will run, and to which output files are written. Defaults to the current working directory if the input file is already there. Mutually exclusive with '--segments-dir' / '-O'.

-O, --segments-dir <segments_dir>#

The directory in which to create the next segment. Requires '--from-checkpoint' or '--from-last-checkpoint' unless starting the first segment.

--copy-executable, --no-copy-executable#

Copy the executable to the run or segments directory.

(1) When no flag is specified: if '--run-dir' / '-o' is set, don't copy; if '--segments-dir' / '-O' is set, copy to the segments directory to support resubmission.

(2) When '--copy-executable' is specified: if '--run-dir' / '-o' is set, copy to the run directory; if '--segments-dir' / '-O' is set, copy to the segments directory to support resubmission. Still don't copy to individual segments.

(3) When '--no-copy-executable' is specified: never copy.

-C, --clean-output#

Clean up existing output files in the run directory before running the executable. See the ‘spectre clean-output’ command for details.

-f, --force#

Overwrite existing files in the '--run-dir' / '-o'. You may also want to use '--clean-output'.

--scheduler <scheduler>#

The scheduler invoked to queue jobs on the machine.

Default:

none

--no-schedule#

Run the executable directly, without scheduling it.

--submit-script-template <submit_script_template>#

Path to a submit script. It will be copied to the ‘run_dir’. It can be a [Jinja template](https://jinja.palletsprojects.com/en/3.0.x/templates/) (see main help text for possible placeholders).

Default:

/__w/spectre/spectre/build/bin/python/spectre/support/SubmitTemplate.sh

-J, --job-name <job_name>#

A short name for the job (see main help text for possible placeholders).

Default:

executable name

-j, -c, --num-procs <num_procs>#

Number of worker threads. Mutually exclusive with '--num-nodes' / '-N'.

-N, --num-nodes <num_nodes>#

Number of nodes.

--queue <queue>#

Name of the queue.

-t, --time-limit <time_limit>#

Wall time limit. Must be compatible with the chosen queue.

-p, --param <extra_params>#

Forward an additional parameter to the input file and submit script templates. Can be specified multiple times. Each entry must be a 'key=value' pair, where the key is the parameter name. The value can be an int, float, string, a comma-separated list, an inclusive range like '0...3', an exclusive range like '0..3' or '0..<3', or an exponentiated value or range like '2**3' or '10**4...6'. If a parameter is a list or range, multiple runs are scheduled recursively. You can also use the parameter in the 'job_name' and in the 'run_dir' or 'segment_dir', and when scheduling ranges of runs you probably should.
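The value syntax above can be illustrated with a small sketch. This is not SpECTRE's actual parser; the function 'expand_param' is hypothetical and assumes integer ranges. It only demonstrates what each documented value form denotes:

```python
# Illustrative sketch (NOT SpECTRE's parser) of how a '--param' value
# string could expand into the list of values it denotes.
import re

def expand_param(value: str):
    """Expand a '--param' value string into a list of values."""
    # Exponentiated range, e.g. '10**4...6' -> [10**4, 10**5, 10**6]
    m = re.fullmatch(r"(\d+)\*\*(\d+)\.\.\.(\d+)", value)
    if m:
        base, lo, hi = map(int, m.groups())
        return [base**e for e in range(lo, hi + 1)]
    # Exponentiated value, e.g. '2**3' -> [8]
    m = re.fullmatch(r"(\d+)\*\*(\d+)", value)
    if m:
        return [int(m.group(1)) ** int(m.group(2))]
    # Inclusive range, e.g. '0...3' -> [0, 1, 2, 3]
    m = re.fullmatch(r"(-?\d+)\.\.\.(-?\d+)", value)
    if m:
        return list(range(int(m.group(1)), int(m.group(2)) + 1))
    # Exclusive range, e.g. '0..3' or '0..<3' -> [0, 1, 2]
    m = re.fullmatch(r"(-?\d+)\.\.<?(-?\d+)", value)
    if m:
        return list(range(int(m.group(1)), int(m.group(2))))
    # Comma-separated list, e.g. 'a,b,c'
    if "," in value:
        return value.split(",")
    # Single scalar value
    return [value]
```

A parameter that expands to more than one value would schedule one run per value, matching the "multiple runs are scheduled recursively" behavior described above.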

--submit, --no-submit#

Submit jobs automatically. If neither option is specified, a prompt will ask for confirmation before a job is submitted.

--context-file-name <context_file_name>#

Name of the context file that supports resubmissions.

Default:

SchedulerContext.yaml

start-inspiral#

Schedule an inspiral simulation from initial data.

Point the ID_INPUT_FILE_PATH to the input file of your initial data run. Also specify ‘id_run_dir’ if the initial data was run in a different directory than where the input file is. Parameters for the inspiral will be determined from the initial data and inserted into the ‘inspiral_input_file_template’. The remaining options are forwarded to the ‘schedule’ command. See ‘schedule’ docs for details.

## Resource allocation

Runs on 4 nodes by default when scheduled on a cluster. Set ‘num_nodes’ to adjust.

spectre bbh start-inspiral [OPTIONS] ID_INPUT_FILE_PATH

Options

-i, --id-run-dir <id_run_dir>#

Directory of the initial data run. Paths in the input file are relative to this directory.

Default:

directory of the ID_INPUT_FILE_PATH

--inspiral-input-file-template <inspiral_input_file_template>#

Input file template for the inspiral.

Default:

/__w/spectre/spectre/build/bin/python/spectre/Pipelines/Bbh/Inspiral.yaml

-L, --refinement-level <refinement_level>#

h-refinement level.

Default:

1

-P, --polynomial-order <polynomial_order>#

p-refinement level.

Default:

9

--continue-with-ringdown#

Continue with the ringdown simulation once a common horizon has formed.

-d, --pipeline-dir <pipeline_dir>#

Directory where steps in the pipeline are created.

-E, --executable <executable>#

The executable to run. Can be a path, or just the name of the executable if it’s in the ‘PATH’. If unspecified, the ‘Executable’ listed in the input file metadata is used.

Default:

executable listed in input file

-o, --run-dir <run_dir>#

The directory to which input file, submit script, etc. are copied, relative to which the executable will run, and to which output files are written. Defaults to the current working directory if the input file is already there. Mutually exclusive with '--segments-dir' / '-O'.

-O, --segments-dir <segments_dir>#

The directory in which to create the next segment. Requires '--from-checkpoint' or '--from-last-checkpoint' unless starting the first segment.

--copy-executable, --no-copy-executable#

Copy the executable to the run or segments directory.

(1) When no flag is specified: if '--run-dir' / '-o' is set, don't copy; if '--segments-dir' / '-O' is set, copy to the segments directory to support resubmission.

(2) When '--copy-executable' is specified: if '--run-dir' / '-o' is set, copy to the run directory; if '--segments-dir' / '-O' is set, copy to the segments directory to support resubmission. Still don't copy to individual segments.

(3) When '--no-copy-executable' is specified: never copy.

-C, --clean-output#

Clean up existing output files in the run directory before running the executable. See the ‘spectre clean-output’ command for details.

-f, --force#

Overwrite existing files in the '--run-dir' / '-o'. You may also want to use '--clean-output'.

--scheduler <scheduler>#

The scheduler invoked to queue jobs on the machine.

Default:

none

--no-schedule#

Run the executable directly, without scheduling it.

--submit-script-template <submit_script_template>#

Path to a submit script. It will be copied to the ‘run_dir’. It can be a [Jinja template](https://jinja.palletsprojects.com/en/3.0.x/templates/) (see main help text for possible placeholders).

Default:

/__w/spectre/spectre/build/bin/python/spectre/support/SubmitTemplate.sh

-J, --job-name <job_name>#

A short name for the job (see main help text for possible placeholders).

Default:

executable name

-j, -c, --num-procs <num_procs>#

Number of worker threads. Mutually exclusive with '--num-nodes' / '-N'.

-N, --num-nodes <num_nodes>#

Number of nodes.

--queue <queue>#

Name of the queue.

-t, --time-limit <time_limit>#

Wall time limit. Must be compatible with the chosen queue.

-p, --param <extra_params>#

Forward an additional parameter to the input file and submit script templates. Can be specified multiple times. Each entry must be a 'key=value' pair, where the key is the parameter name. The value can be an int, float, string, a comma-separated list, an inclusive range like '0...3', an exclusive range like '0..3' or '0..<3', or an exponentiated value or range like '2**3' or '10**4...6'. If a parameter is a list or range, multiple runs are scheduled recursively. You can also use the parameter in the 'job_name' and in the 'run_dir' or 'segment_dir', and when scheduling ranges of runs you probably should.

--submit, --no-submit#

Submit jobs automatically. If neither option is specified, a prompt will ask for confirmation before a job is submitted.

--context-file-name <context_file_name>#

Name of the context file that supports resubmissions.

Default:

SchedulerContext.yaml

Arguments

ID_INPUT_FILE_PATH#

Required argument

start-ringdown#

Schedule a ringdown simulation from the inspiral.

Point the INSPIRAL_INPUT_FILE_PATH to the input file of the last inspiral segment. Also specify ‘inspiral_run_dir’ if the simulation was run in a different directory than where the input file is. Parameters for the ringdown will be determined from the inspiral and inserted into the ‘ringdown_input_file_template’. The remaining options are forwarded to the ‘schedule’ command. See ‘schedule’ docs for details.

spectre bbh start-ringdown [OPTIONS] INSPIRAL_INPUT_FILE_PATH

Options

-i, --inspiral-run-dir <inspiral_run_dir>#

Directory of the last inspiral segment. Paths in the input file are relative to this directory.

Default:

directory of the INSPIRAL_INPUT_FILE_PATH

--ringdown-input-file-template <ringdown_input_file_template>#

Input file template for the ringdown.

Default:

/__w/spectre/spectre/build/bin/python/spectre/Pipelines/Bbh/Ringdown.yaml

-L, --refinement-level <refinement_level>#

h-refinement level.

Default:

0

-P, --polynomial-order <polynomial_order>#

p-refinement level.

Default:

5

-d, --pipeline-dir <pipeline_dir>#

Directory where steps in the pipeline are created.

-E, --executable <executable>#

The executable to run. Can be a path, or just the name of the executable if it’s in the ‘PATH’. If unspecified, the ‘Executable’ listed in the input file metadata is used.

Default:

executable listed in input file

-o, --run-dir <run_dir>#

The directory to which input file, submit script, etc. are copied, relative to which the executable will run, and to which output files are written. Defaults to the current working directory if the input file is already there. Mutually exclusive with '--segments-dir' / '-O'.

-O, --segments-dir <segments_dir>#

The directory in which to create the next segment. Requires '--from-checkpoint' or '--from-last-checkpoint' unless starting the first segment.

--copy-executable, --no-copy-executable#

Copy the executable to the run or segments directory.

(1) When no flag is specified: if '--run-dir' / '-o' is set, don't copy; if '--segments-dir' / '-O' is set, copy to the segments directory to support resubmission.

(2) When '--copy-executable' is specified: if '--run-dir' / '-o' is set, copy to the run directory; if '--segments-dir' / '-O' is set, copy to the segments directory to support resubmission. Still don't copy to individual segments.

(3) When '--no-copy-executable' is specified: never copy.

-C, --clean-output#

Clean up existing output files in the run directory before running the executable. See the ‘spectre clean-output’ command for details.

-f, --force#

Overwrite existing files in the '--run-dir' / '-o'. You may also want to use '--clean-output'.

--scheduler <scheduler>#

The scheduler invoked to queue jobs on the machine.

Default:

none

--no-schedule#

Run the executable directly, without scheduling it.

--submit-script-template <submit_script_template>#

Path to a submit script. It will be copied to the ‘run_dir’. It can be a [Jinja template](https://jinja.palletsprojects.com/en/3.0.x/templates/) (see main help text for possible placeholders).

Default:

/__w/spectre/spectre/build/bin/python/spectre/support/SubmitTemplate.sh

-J, --job-name <job_name>#

A short name for the job (see main help text for possible placeholders).

Default:

executable name

-j, -c, --num-procs <num_procs>#

Number of worker threads. Mutually exclusive with '--num-nodes' / '-N'.

-N, --num-nodes <num_nodes>#

Number of nodes.

--queue <queue>#

Name of the queue.

-t, --time-limit <time_limit>#

Wall time limit. Must be compatible with the chosen queue.

-p, --param <extra_params>#

Forward an additional parameter to the input file and submit script templates. Can be specified multiple times. Each entry must be a 'key=value' pair, where the key is the parameter name. The value can be an int, float, string, a comma-separated list, an inclusive range like '0...3', an exclusive range like '0..3' or '0..<3', or an exponentiated value or range like '2**3' or '10**4...6'. If a parameter is a list or range, multiple runs are scheduled recursively. You can also use the parameter in the 'job_name' and in the 'run_dir' or 'segment_dir', and when scheduling ranges of runs you probably should.

--submit, --no-submit#

Submit jobs automatically. If neither option is specified, a prompt will ask for confirmation before a job is submitted.

--context-file-name <context_file_name>#

Name of the context file that supports resubmissions.

Default:

SchedulerContext.yaml

Arguments

INSPIRAL_INPUT_FILE_PATH#

Required argument

clean-output#

Delete output files specified in the 'input_file' from the 'output_dir', raising an error if expected output files are not found.

The input_file must list its expected output files in the metadata. They may contain glob patterns:

```yaml
ExpectedOutput:
- Reduction.h5
- Volume*.h5
```

spectre clean-output [OPTIONS] INPUT_FILE

Options

-o, --output-dir <output_dir>#

Required Output directory of the run to clean up

-f, --force#

Suppress all errors

Arguments

INPUT_FILE#

Required argument

combine-h5#

Combines multiple HDF5 files

spectre combine-h5 [OPTIONS] COMMAND [ARGS]...

combine-h5-dat#

Combines multiple HDF5 dat files

This executable is used for combining a series of HDF5 files, each containing one or more dat files, into a single HDF5 file. A typical use case is to join dat-containing HDF5 files from different segments of a simulation, with each segment containing values of the dat files during different time intervals.

Arguments:

h5files: List of H5 dat files to join.

output: Output filename. An extension '.h5' will be added if not present.

force: If specified, overwrite the output file if it already exists.

spectre combine-h5 combine-h5-dat [OPTIONS] [H5FILES]...

Options

-o, --output <output>#

Required Combined output filename.

-f, --force#

If the output file already exists, overwrite it.

Arguments

H5FILES#

Optional argument(s)

vol#

Combines volume data spread over multiple H5 files into a single file

The typical use case is to combine volume data from multiple nodes into a single file, if this is necessary for further processing (e.g. for the ‘extend-connectivity’ command). Note that for most use cases it is not necessary to combine the volume data into a single file, as most commands can operate on multiple input H5 files (e.g. ‘generate-xdmf’).

Note that this command does not currently combine volume data from different time steps (e.g. from multiple segments of a simulation). All input H5 files must contain the same set of observation IDs.

spectre combine-h5 vol [OPTIONS] [H5FILES]...

Options

-d, --subfile-name <subfile_name>#

Subfile name of the volume file in the H5 file.

-o, --output <output>#

Required Combined output filename.

--check-src, --no-check-src#

Flag to check src files. 'True' means src files exist and will be checked; 'False' means there are no src files to check.

Default:

True

Arguments

H5FILES#

Optional argument(s)

delete-subfiles#

Delete subfiles from the ‘H5FILES’

spectre delete-subfiles [OPTIONS] [H5FILES]...

Options

-d, --subfile <subfiles>#

Required Subfile to delete. Can be specified multiple times to delete many subfiles at once.

--repack, --no-repack#

Repack the H5 files after deleting subfiles to reduce file size. Otherwise, the subfiles are deleted but the file size remains unchanged.

Arguments

H5FILES#

Optional argument(s)

extend-connectivity#

Extend the connectivity inside a single HDF5 volume file.

The extended connectivity fills in gaps between elements of a SpECTRE evolution. Intended to be used as a post-processing routine to improve the quality of visualizations. Note: this does not work with subcell or AMR systems, and the connectivity is only extended within each block, not between blocks. This command operates on a single HDF5 volume file; if there are multiple files, run the combine-h5 executable first, then run extend-connectivity on the newly generated HDF5 file.

spectre extend-connectivity [OPTIONS] FILENAME

Options

-d, --subfile-name <subfile_name>#

Required Subfile name of the volume file in the H5 file (omit the file extension).

Arguments

FILENAME#

Required argument

extract-dat#

Extract dat files from an H5 file

Extract all Dat files inside a SpECTRE HDF5 file. The resulting files will be put into the ‘OUT_DIR’ if specified, or printed to standard output. The directory structure will be identical to the group structure inside the HDF5 file.

spectre extract-dat [OPTIONS] FILENAME [OUT_DIR]

Options

-j, --num-cores <num_cores>#

Number of cores to run on.

Default:

1

-p, --precision <precision>#

Precision with which to save (or print) the data.

Default:

16

-f, --force#

If the output directory already exists, overwrite it.

-l, --list#

List all dat files in the HDF5 file and exit.

-d, --subfile <subfiles>#

Full path of subfile to extract (including extension). Can be specified multiple times to extract several subfiles at once. If unspecified, all subfiles will be extracted.

Arguments

FILENAME#

Required argument

OUT_DIR#

Optional argument

extract-input#

Extract input file from an H5 file

Extract InputSource.yaml from the ‘H5_FILE’ and write it to the ‘OUTPUT_FILE’, or print to stdout if OUTPUT_FILE is unspecified.

spectre extract-input [OPTIONS] H5_FILE [OUTPUT_FILE]

Arguments

H5_FILE#

Required argument

OUTPUT_FILE#

Optional argument

generate-xdmf#

Generate an XDMF file for ParaView and VisIt

Read volume data from the 'H5FILES' and generate an XDMF file. The XDMF file points into the 'H5FILES' files so ParaView and VisIt can load the volume data. To process multiple files suffixed with the node number and from multiple segments, specify a glob like 'Segment*/VolumeData*.h5'.

To load the XDMF file in ParaView you must choose the ‘Xdmf Reader’, NOT ‘Xdmf3 Reader’.

Arguments:

h5files: List of H5 volume data files.

output: Output filename. A '.xmf' extension is added if not present.

subfile_name: Volume data subfile in the H5 files.

relative_paths: If True, use relative paths in the XDMF file (default). If False, use absolute paths.

start_time: Optional. The earliest time at which to start visualizing. The start-time value is included.

stop_time: Optional. The time at which to stop visualizing. The stop-time value is not included.

stride: Optional. View only every stride'th time step.

coordinates: Optional. Name of coordinates dataset. Default: "InertialCoordinates".

spectre generate-xdmf [OPTIONS] [H5FILES]...

Options

-o, --output <output>#

Output file name. A ‘.xmf’ extension will be added if not present. If unspecified, the output will be written to stdout.

-d, --subfile-name <subfile_name>#

Name of the volume data subfile in the H5 files. A ‘.vol’ extension is added if needed. If unspecified, and the first H5 file contains only a single ‘.vol’ subfile, choose that. Otherwise, list all ‘.vol’ subfiles and exit.

--relative-paths, --absolute-paths#

Use relative paths or absolute paths in the XDMF file.

Default:

True

--stride <stride>#

View only every stride’th time step

--start-time <start_time>#

The earliest time at which to start visualizing. The start-time value is included.

--stop-time <stop_time>#

The time at which to stop visualizing. The stop-time value is not included.

--coordinates <coordinates>#

The coordinates to use for visualization

Default:

InertialCoordinates

Arguments

H5FILES#

Optional argument(s)

interpolate-to-mesh#

Interpolates an h5 file to a desired grid

The function reads data from source_volume_data inside source_file_path, interpolates all components specified by components_to_interpolate to the grid specified by target_mesh and writes the results into target_volume_data inside target_file_path. The target_file_path can be the same as the source_file_path if the volume subfile paths are different.

Parameters#

source_file_path: str

The path to the source file that contains the 'source_volume_data'.

target_mesh: spectre.Spectral.Mesh

The mesh to which the data is interpolated.

components_to_interpolate: list of str, optional

A list of all components to interpolate. Accepts regular expressions. By default, all tensor components are interpolated.

target_file_path: str, optional

The path to which the interpolated data is written. By default this is set to 'source_file_path', so the interpolated data is written to the same file, but into a different subfile specified by 'target_volume_data'.

source_volume_data: str, optional

The name of the .vol subfile inside the source file where the source data can be found.

target_volume_data: str, optional

The name of the .vol subfile inside the target file where the target data is written.

obs_start: float, optional

Disregard all observations with observation value strictly before 'obs_start'.

obs_end: float, optional

Disregard all observations with observation value strictly after 'obs_end'.

obs_stride: float, optional

Take only every 'obs_stride'-th observation.

spectre interpolate-to-mesh [OPTIONS]

Options

--source-file-prefix <source_file_prefix>#

Required The prefix for the .h5 source files. All files starting with the prefix followed by a number will be interpolated.

--source-subfile-name <source_subfile_name>#

Required The name of the volume data subfile within the source files in which the data is contained

--target-file-prefix <target_file_prefix>#

The prefix for the target files where the interpolated data is written. When no target file is specified, the interpolated data is written to the corresponding source file in a new volume data subfile.

--target-subfile-name <target_subfile_name>#

Required The name of the volume data subfile within the target files where the data will be written.

-t, --tensor-component <tensor_components>#

The name of the tensor to be interpolated. Accepts regular expression. Can be specified multiple times to interpolate several tensors at once. If none are specified, all tensors are interpolated.

--target-extents <target_extents>#

Required The extents of the target grid, as a comma-separated list without spaces. Can be different for each dimension, e.g. '3,5,4'.

--target-basis <target_basis>#

Required The basis of the target grid.

Options:

Legendre | Chebyshev | FiniteDifference | SphericalHarmonic

--target-quadrature <target_quadrature>#

Required The quadrature of the target grid.

Options:

Gauss | GaussLobatto | CellCentered | FaceCentered | Equiangular

--start-time <start_time>#

Disregard all observations with value before this point

--stop-time <stop_time>#

Disregard all observations with value after this point

--stride <stride>#

Stride through observations with this step size.

-j, --num-jobs <num_jobs>#

The maximum number of processes to be started. A process is spawned for each source file up to this number.

interpolate-to-points#

Interpolate volume data to target coordinates.

spectre interpolate-to-points [OPTIONS] [H5_FILES]...

Options

-d, --subfile-name <subfile_name>#

Name of subfile within h5 file containing volume data to plot.

-l, --list-vars#

Print available variables and exit.

-y, --var <vars_patterns>#

Variable to plot. List any tensor components in the volume data file, such as ‘Shift_x’. Also accepts glob patterns like ‘Shift_*’. Can be specified multiple times.

--list-observations, --list-times#

Print all available observation times and exit.

--step <step>#

Observation step number. Specify '-1' or 'last' for the last step in the file. Mutually exclusive with '--time'.

--time <time>#

Observation time. The observation step closest to the specified time is selected. Mutually exclusive with '--step'.

-t, --target-coords-file <target_coords_file>#

Text file with target coordinates to interpolate to. Must have ‘dim’ columns with Cartesian coordinates. Rows enumerate points. Can be the output of ‘numpy.savetxt’.
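Such a file can be produced with NumPy, as the option description suggests. A minimal sketch for 3D data (the filename and sample points are hypothetical):

```python
# Write a file of 3D Cartesian target coordinates in the format
# '--target-coords-file' expects: one row per point, one column
# per dimension, whitespace-delimited (numpy.savetxt's default).
import numpy as np

points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
])
np.savetxt("target_coords.txt", points)
```

The resulting file can then be passed via '-t target_coords.txt'.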

-p, --target-coords <target_coords>#

List target coordinates explicitly, e.g. ‘0,0,0’. Can be specified multiple times to quickly interpolate to a couple of target points.

-o, --output <output>#

Output text file

--delimiter <delimiter>#

Delimiter separating columns for both the '--target-coords-file' and the '--output' file.

Default:

whitespace

-j, --num-threads <num_threads>#

Number of threads to use for interpolation. Only available if compiled with OpenMP. Parallelization is over volume data files, so this only has an effect if multiple files are specified.

Default:

all available cores

Arguments

H5_FILES#

Optional argument(s)

plot-command#

Plot data from simulations

See subcommands for available plots.

spectre plot-command [OPTIONS] COMMAND [ARGS]...

along-line#

Plot variables along a line through volume data

Interpolates the volume data in the H5_FILES to a line and plots the selected variables. You choose the line by specifying the start and end points.

Either select a specific observation in the volume data with '--step' or '--time', or specify '--animate' to produce an animation over all observations.

spectre plot-command along-line [OPTIONS] [H5_FILES]...

Options

-d, --subfile-name <subfile_name>#

Name of subfile within h5 file containing volume data to plot.

-l, --list-vars#

Print available variables and exit.

-y, --var <vars_patterns>#

Variable to plot. List any tensor components in the volume data file, such as ‘Shift_x’. Also accepts glob patterns like ‘Shift_*’. Can be specified multiple times.

--list-observations, --list-times#

Print all available observation times and exit.

--step <step>#

Observation step number. Specify ‘-1’ or ‘last’ for the last step in the file. Mutually exclusive with ‘–time’.

--time <time>#

Observation time. The observation step closest to the specified time is selected. Mutually exclusive with ‘–step’.

-A, --line-start <line_start>#

Coordinates of the start of the line through the volume data. Specify as comma-separated list, e.g. ‘0,0,0’.

-B, --line-end <line_end>#

Coordinates of the end of the line through the volume data. Specify as comma-separated list, e.g. ‘1,0,0’.

-N, --num-samples <num_samples>#

Number of uniformly spaced samples along the line to which volume data is interpolated.

Default:

200

-j, --num-threads <num_threads>#

Number of threads to use for interpolation. Only available if compiled with OpenMP. Parallelization is over volume data files, so this only has an effect if multiple files are specified.

Default:

all available cores

--y-bounds <y_bounds>#

The lower and upper bounds of the y-axis.

--animate#

Animate over all observations.

--interval <interval>#

Delay between frames in milliseconds. Only used for animations.

-s, --stylesheet <stylesheet>#

Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.
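For illustration, a stylesheet is a plain-text file of matplotlib rc parameters. A minimal, hypothetical ‘stylesheet.mplstyle’ could look like:

```
axes.grid : True
lines.linewidth : 1.5
font.size : 11
legend.frameon : False
```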

-o, --output <output>#

Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).

Arguments

H5_FILES#

Optional argument(s)

Environment variables

SPECTRE_MPL_STYLESHEET

Provide a default for --stylesheet

control-system#

Plot diagnostic information regarding all control systems except size control. For size-control diagnostics, use ‘spectre plot size-control’.

This tool assumes there are subfiles in each of the “reduction-files” with the path /ControlSystems/{Name}/*.dat, where {Name} is the name of the control system and the *.dat files are the components of that control system.

Shape control is special because it has a large number of components. Control whether shape is plotted, and how many of its components, with the ‘–with-shape’/‘–without-shape’, ‘–shape-l_max’, and ‘–show-all-m’ options.

spectre plot-command control-system [OPTIONS] [REDUCTION_FILES]...

Options

--with-shape, --without-shape#

Whether or not to plot shape control.

Default:

True

-l, --shape-l_max <shape_l_max>#

The max number of spherical harmonics to show on the plot. Since higher ell can have a lot of components, it may be desirable to show fewer components. Never plots l=0,1 since we don’t control these components. Only used if ‘–with-shape’.

Default:

2

--show-all-m#

When plotting shape control, for a given ell, plot all m components. Default is, for a given ell, to plot the L2 norm over all the m components. Only used if ‘–with-shape’.

Default:

False

--x-bounds <x_bounds>#

The lower and upper bounds of the x-axis.

--x-label <x_label>#

The label on the x-axis.

-t, --title <title>#

Title of the graph.

-s, --stylesheet <stylesheet>#

Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.

-o, --output <output>#

Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).

Arguments

REDUCTION_FILES#

Optional argument(s)

Environment variables

SPECTRE_MPL_STYLESHEET

Provide a default for --stylesheet

dat#

Plot columns in ‘.dat’ datasets in H5 files

spectre plot-command dat [OPTIONS] H5_FILE

Options

-d, --subfile-name <subfile_name>#

The dat subfile to read. If unspecified, all available dat subfiles will be printed.

-l, --legend-only#

Print out the available quantities and exit.

-y, --function <functions>#

The quantity to plot. Can be specified multiple times to plot several quantities on a single figure. If unspecified, list all available quantities and exit. Labels of quantities can be specified as key-value pairs such as ‘Error(Psi)=$L_2(psi)$’. Remember to wrap the key-value pair in quotes on the command line to avoid issues with special characters or spaces.
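The ‘quantity=label’ convention can be sketched as follows; ‘parse_function’ is a hypothetical helper written for illustration, not the actual CLI code:

```python
# Split a command-line value like 'Error(Psi)=$L_2(psi)$' into the
# quantity name and its plot label. Without an '=', the name is
# reused as the label. (Illustrative helper, not the real CLI code.)
def parse_function(value):
    name, sep, label = value.partition("=")
    return (name, label) if sep else (name, name)

print(parse_function("Error(Psi)=$L_2(psi)$"))
```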

-x, --x-axis <x_axis>#

Select the column in the dat file to use as the x-axis of the plot.

Default:

first column in the dat file

--x-label <x_label>#

The label on the x-axis.

Default:

name of the x-axis column

--y-label <y_label>#

The label on the y-axis.

Default:

no label

--x-logscale#

Set the x-axis to log scale.

--y-logscale#

Set the y-axis to log scale.

--x-bounds <x_bounds>#

The lower and upper bounds of the x-axis.

--y-bounds <y_bounds>#

The lower and upper bounds of the y-axis.

-t, --title <title>#

Title of the graph.

Default:

subfile name

-s, --stylesheet <stylesheet>#

Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.

-o, --output <output>#

Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).

Arguments

H5_FILE#

Required argument

Environment variables

SPECTRE_MPL_STYLESHEET

Provide a default for --stylesheet

elliptic-convergence#

Plot elliptic solver convergence

spectre plot-command elliptic-convergence [OPTIONS] H5_FILE

Options

--linear-residuals-subfile-name <linear_residuals_subfile_name>#

The name of the subfile containing the linear solver residuals

Default:

GmresResiduals.dat

--nonlinear-residuals-subfile-name <nonlinear_residuals_subfile_name>#

The name of the subfile containing the nonlinear solver residuals

Default:

NewtonRaphsonResiduals.dat

-s, --stylesheet <stylesheet>#

Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.

-o, --output <output>#

Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).

Arguments

H5_FILE#

Required argument

Environment variables

SPECTRE_MPL_STYLESHEET

Provide a default for --stylesheet

memory-monitors#

Plot the memory usage of a simulation from the MemoryMonitors data in the Reductions H5 file.

This tool assumes there is a group in each of the “reduction-files” with the path “/MemoryMonitors/” that holds dat files for each parallel component that was monitored.

Note that the totals plotted here are not necessarily the total memory usage of the simulation. The memory monitors only capture what is inside ‘pup’ functions. Any memory that cannot be captured by a ‘pup’ function will not be represented in this plot.

spectre plot-command memory-monitors [OPTIONS] [REDUCTION_FILES]...

Options

--use-mb, --use-gb#

Plot the y-axis in Megabytes or Gigabytes

Default:

False

--x-label <x_label>#

The label on the x-axis.

Default:

name of the x-axis column

--x-bounds <x_bounds>#

The lower and upper bounds of the x-axis.

-s, --stylesheet <stylesheet>#

Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.

-o, --output <output>#

Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).

Arguments

REDUCTION_FILES#

Optional argument(s)

Environment variables

SPECTRE_MPL_STYLESHEET

Provide a default for --stylesheet

power-monitors#

Plot power monitors from volume data

Reads volume data in the ‘H5_FILES’ and computes power monitors, which are essentially the spectral modes in each dimension of the grid. They give an indication how well the spectral expansion resolves fields on the grid. Power monitors are computed for all tensor components selected with the ‘–var’ / ‘-y’ option, and combined as an L2 norm.

One subplot is created for every selected ‘–block’ / ‘-b’. This can be a single block name, or a block group defined by the domain (such as all six wedges in a spherical shell). The power monitors in every logical direction of the grid are plotted for all elements in the block or block group. The logical directions are labeled “xi”, “eta” and “zeta”, and their orientation is defined by the coordinate maps in the domain. For example, see the documentation of the ‘Wedge’ map to understand which logical direction is radial in spherical shells.

spectre plot-command power-monitors [OPTIONS] [H5_FILES]...

Options

-d, --subfile-name <subfile_name>#

Name of subfile within h5 file containing volume data to plot.

-l, --list-vars#

Print available variables and exit.

-y, --var <vars_patterns>#

Variable to plot. List any tensor components in the volume data file, such as ‘Shift_x’. Also accepts glob patterns like ‘Shift_*’. Can be specified multiple times.

--list-observations, --list-times#

Print all available observation times and exit.

--step <step>#

Observation step number. Specify ‘-1’ or ‘last’ for the last step in the file. Mutually exclusive with ‘–time’.

--time <time>#

Observation time. The observation step closest to the specified time is selected. Mutually exclusive with ‘–step’.

--list-blocks#

Print available blocks and block groups and exit.

-b, --block <block_or_group_names>#

Name of a block or block group to analyze. Can be specified multiple times to plot several blocks or block groups at once.

-e, --elements <element_patterns>#

Include only elements that match the specified glob pattern, like ‘B*,(L1I*,L0I0,L0I0)’. Can be specified multiple times, in which case elements are included that match _any_ of the specified patterns. If unspecified, include all elements in the blocks.
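This glob matching can be sketched with the standard-library ‘fnmatch’ module; the element names below are hypothetical examples in the block/refinement naming style, not taken from a real data file:

```python
from fnmatch import fnmatch

# Select elements that match any of the given glob patterns.
# The element names here are hypothetical examples.
elements = [
    "B0,(L1I0,L0I0,L0I0)",
    "B0,(L1I1,L0I0,L0I0)",
    "B1,(L2I3,L0I0,L0I0)",
]
patterns = ["B*,(L1I*,L0I0,L0I0)"]
selected = [e for e in elements
            if any(fnmatch(e, p) for p in patterns)]
```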

--list-elements#

List all elements in the specified blocks that match the ‘–elements’ / ‘-e’ patterns, and exit.

-T, --over-time#

Plot power monitors over time.

--figsize <figsize>#

Figure size in inches.

-s, --stylesheet <stylesheet>#

Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.

-o, --output <output>#

Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).

Arguments

H5_FILES#

Optional argument(s)

Environment variables

SPECTRE_MPL_STYLESHEET

Provide a default for --stylesheet

size-control#

Plot diagnostic information regarding the Size control system.

This tool assumes there is a subfile in each of the “reduction-files” with the path /ControlSystems/Size{LABEL}/Diagnostics.dat, where {LABEL} is replaced with the “object-label” input option.

spectre plot-command size-control [OPTIONS] [REDUCTION_FILES]...

Options

-d, --object-label <object_label>#

Required Which object to plot. This is either ‘A’, ‘B’, or ‘None’. ‘None’ is used when there is only one black hole in the simulation.

--x-bounds <x_bounds>#

The lower and upper bounds of the x-axis.

--x-label <x_label>#

The label on the x-axis.

-t, --title <title>#

Title of the graph.

-s, --stylesheet <stylesheet>#

Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.

-o, --output <output>#

Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).

Arguments

REDUCTION_FILES#

Optional argument(s)

Environment variables

SPECTRE_MPL_STYLESHEET

Provide a default for --stylesheet

slice#

Plot variables on a slice through volume data

Interpolates the volume data in the H5_FILES to a slice and plots the selected variables. You choose the slice by specifying its center, extents, normal, and up direction.

Either select a specific observation in the volume data with ‘–step’ or ‘–time’, or specify ‘–animate’ to produce an animation over all observations.

spectre plot-command slice [OPTIONS] [H5_FILES]...

Options

-d, --subfile-name <subfile_name>#

Name of subfile within h5 file containing volume data to plot.

-l, --list-vars#

Print available variables and exit.

-y, --var <vars_patterns>#

Variable to plot. List any tensor components in the volume data file, such as ‘Shift_x’. Also accepts glob patterns like ‘Shift_*’.

--list-observations, --list-times#

Print all available observation times and exit.

--step <step>#

Observation step number. Specify ‘-1’ or ‘last’ for the last step in the file. Mutually exclusive with ‘–time’.

--time <time>#

Observation time. The observation step closest to the specified time is selected. Mutually exclusive with ‘–step’.

-C, --slice-origin, --slice-center <slice_origin>#

Coordinates of the center of the slice through the volume data. Specify as comma-separated list, e.g. ‘0,0,0’.

-X, --slice-extent <slice_extent>#

Extent in both directions of the slice through the volume data, e.g. ‘-X 10 10’ for a 10x10 slice in the coordinates of the volume data.

-n, --slice-normal <slice_normal>#

Direction of the normal of the slice through the volume data. Specify as comma-separated list, e.g. ‘0,0,1’ for a slice in the xy-plane.

-u, --slice-up <slice_up>#

Up-direction of the slice through the volume data. Specify as comma-separated list, e.g. ‘0,1,0’ so the y-axis is the vertical axis of the plot.

-N, --num-samples <num_samples>#

Number of uniformly spaced samples along each direction of the slice to which volume data is interpolated.

Default:

200, 200

-j, --num-threads <num_threads>#

Number of threads to use for interpolation. Only available if compiled with OpenMP. Parallelization is over volume data files, so this only has an effect if multiple files are specified.

Default:

all available cores

-t, --title <title>#

Title for the plot.

Default:

name of the variable

--y-bounds, --data-bounds <data_bounds>#

Lower and upper bounds for the color scale of the plot.

--animate#

Animate over all observations.

--interval <interval>#

Delay between frames in milliseconds. Only used for animations.

-s, --stylesheet <stylesheet>#

Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.

-o, --output <output>#

Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).

Arguments

H5_FILES#

Optional argument(s)

Environment variables

SPECTRE_MPL_STYLESHEET

Provide a default for --stylesheet

render-1d#

Render 1D data

spectre render-1d [OPTIONS] [H5_FILES]...

Options

-d, --subfile-name <subfile_name>#

Name of subfile within h5 file containing 1D volume data to be rendered.

-y, --var <vars>#

Name of variable to plot, e.g. ‘Psi’ or ‘Error(Psi)’. Can be specified multiple times. If unspecified, plot all available variables. Labels for variables can be specified as key-value pairs such as ‘Error(Psi)=$L_2(psi)$’. Remember to wrap the key-value pair in quotes on the command line to avoid issues with special characters or spaces.

-l, --list-vars#

Print available variables and exit.

--step <step>#

If specified, renders the integer observation step instead of an animation. Set to ‘-1’ for the last step.

--interval <interval>#

Delay between frames in milliseconds

--x-label <x_label>#

The label on the x-axis.

Default:

name of the x-axis column

--y-label <y_label>#

The label on the y-axis.

Default:

no label

--x-logscale#

Set the x-axis to log scale.

--y-logscale#

Set the y-axis to log scale.

--x-bounds <x_bounds>#

The lower and upper bounds of the x-axis.

--y-bounds <y_bounds>#

The lower and upper bounds of the y-axis.

-t, --title <title>#

Title of the graph.

Default:

subfile name

-s, --stylesheet <stylesheet>#

Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.

-o, --output <output>#

Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).

--show-collocation-points#
--show-element-boundaries#
--show-basis-polynomials#

Arguments

H5_FILES#

Optional argument(s)

Environment variables

SPECTRE_MPL_STYLESHEET

Provide a default for --stylesheet

render-3d-command#

Renders a 3D visualization of simulation data.

See subcommands for possible renderings.

spectre render-3d-command [OPTIONS] COMMAND [ARGS]...

clip#

Renders a clip normal to the z-direction.

XMF_FILE is the path to the XMF file that references the simulation data. It is typically generated by the ‘generate-xdmf’ command.

This is a quick way to get some insight into the simulation data. For more advanced renderings, open the XMF file in an interactive ParaView session, or implement rendering commands specialized for your use case.

spectre render-3d-command clip [OPTIONS] XMF_FILE

Options

-o, --output <output>#

Required Output file. Include extension such as ‘.png’.

-y, --variable <variable>#

Variable to plot. Lists available variables when not specified.

-t, --time-step <time_step>#

Select a time step. Specify ‘-1’ or ‘last’ to select the last time step.

Default:

first

--animate#

Produce an animation of all time steps.

--log#

Plot variable in log scale.

--show-grid#

Show grid lines

--zoom <zoom_factor>#

Zoom factor.

--clip-origin <clip_origin>#

Origin of the clipping plane

Default:

0.0, 0.0, 0.0

--clip-normal <clip_normal>#

Normal of the clipping plane

Default:

0.0, 0.0, 1.0

Arguments

XMF_FILE#

Required argument

domain#

Renders a 3D domain with elements and grid lines

This rendering is a starting point for visualizations of the domain geometry, e.g. for publications.

XMF_FILE is the path to the XMF file that references the simulation data. It is typically generated by the ‘generate-xdmf’ command. You can also provide a second XMF file with higher resolution data, which is used to render the outlines of elements to make them smoother.

spectre render-3d-command domain [OPTIONS] XMF_FILE [HI_RES_XMF_FILE]

Options

-o, --output <output>#

Required Output file. Include extension such as ‘.png’.

-t, --time-step <time_step>#

Select a time step. Specify ‘-1’ or ‘last’ to select the last time step.

Default:

first

--animate#

Produce an animation of all time steps.

--zoom <zoom_factor>#

Zoom factor.

--camera-theta <camera_theta>#

Viewing angle from the z-axis in degrees.

Default:

0.0

--camera-phi <camera_phi>#

Viewing angle around the z-axis in degrees.

Default:

0.0

--clip-origin, --slice-origin <clip_origin>#

Origin of the clipping plane

Default:

0.0, 0.0, 0.0

--clip-normal, --slice-normal <clip_normal>#

Normal of the clipping plane

Default:

0.0, 0.0, 1.0

--slice, --clip#

Use a slice instead of a clip.

Default:

False

Arguments

XMF_FILE#

Required argument

HI_RES_XMF_FILE#

Optional argument

resubmit#

Create the next segment in the SEGMENTS_DIR and schedule it

Arguments:

  segments_dir: Path to the segments directory, or path to the last segment in the segments directory. The next segment will be created here.

  context_file_name: Optional. Name of the file that stores the context for resubmissions in the ‘run_dir’. This file gets created by ‘spectre.support.schedule’. (Default: “SchedulerContext.yaml”)

Returns: The ‘subprocess.CompletedProcess’ representing the process that scheduled the run. Returns ‘None’ if no run was scheduled.

spectre resubmit [OPTIONS] [SEGMENTS_DIRS]...

Options

--submit, --no-submit#

Submit jobs automatically. If neither option is specified, a prompt will ask for confirmation before a job is submitted.

--context-file-name <context_file_name>#

Name of the context file that supports resubmissions.

Default:

SchedulerContext.yaml

Arguments

SEGMENTS_DIRS#

Optional argument(s)

run-next#

Run the next entrypoint specified in the input file metadata

Invokes the Python function specified in the ‘Next’ section of the input file metadata. It can be specified like this:

```yaml
# Input file metadata
Next:
  Run: spectre.Pipelines.Bbh.Inspiral:start_inspiral
  With:
    # Arguments to pass to the function
    submit: True
# Rest of the input file
```

The function will be invoked in the working directory (‘–input-run-dir’ / ‘-i’), which defaults to the directory of the input file. The following special values can be used for the arguments:

  • ‘__file__’: The (absolute) path of the input file.

  • ‘None’: The Python value None.

Arguments:

  next_entrypoint: The Python function to run. Must be a dictionary with the following keys:

  • “Run”: The Python module and function to run, separated by a colon. For example, “spectre.Pipelines.Bbh.Ringdown:start_ringdown”.

  • “With”: A dictionary of arguments to pass to the function.

  input_file_path: Path to the input file that specified the entrypoint. Used to resolve ‘__file__’ in the entrypoint arguments.

  cwd: The working directory in which to run the entrypoint. Used to resolve relative paths in the entrypoint arguments.

spectre run-next [OPTIONS] INPUT_FILE_PATH

Options

-i, --input-run-dir <input_run_dir>#

Directory where the input file ran. Paths in the input file are relative to this directory.

Default:

directory of the INPUT_FILE_PATH

Arguments

INPUT_FILE_PATH#

Required argument

schedule#

Schedule executable runs with an input file

Configures the input file, submit script, etc. to the ‘run_dir’, and then invokes the ‘scheduler’ to submit the run (typically “sbatch”). You can also bypass the scheduler and run the executable directly by setting the ‘scheduler’ to ‘None’.

# Selecting the executable

Specify either a path to the executable, or just its name if it’s in the ‘PATH’. If unspecified, the ‘Executable’ listed in the input file metadata is used.

By default, the executable and submit scripts will be copied to the segments directory to support resubmissions (see below). See the ‘copy_executable’ argument docs for details on controlling this behavior.

# Segments and run directories

You can set either the ‘run_dir’ or the ‘segments_dir’ to specify where the executable will run (but not both). If you specify a ‘run_dir’, the executable will run in it directly. If you specify a ‘segments_dir’, a new segment will be created and used as the ‘run_dir’. Segments are named with incrementing integers and continue the run from the previous segment. For example, the following is a typical ‘segments_dir’:

```sh
# Copy of the executable
MyExecutable
# Copy of the submit script template (base and machine-specific)
SubmitTemplateBase.sh
SubmitTemplate.sh
# One segment per day
Segment_0000/
  InputFile.yaml
  Submit.sh
  Output.h5
  # Occasional checkpoints, and a checkpoint before termination
  Checkpoints/
    Checkpoint_0000/
    Checkpoint_0001/
    …
# Next segment continues from last checkpoint of previous segment
Segment_0001/
  …
```

You can omit the ‘run_dir’ if the current working directory already contains the input file.
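The incrementing segment naming can be sketched as follows; ‘next_segment’ is a hypothetical helper that assumes the four-digit ‘Segment_NNNN’ pattern shown above:

```python
import re

# Given the names of existing segment directories, compute the name
# of the next segment. (Illustrative sketch of the naming scheme,
# not the real implementation.)
def next_segment(existing):
    nums = [int(m.group(1)) for name in existing
            if (m := re.fullmatch(r"Segment_(\d{4})", name))]
    return f"Segment_{(max(nums) + 1 if nums else 0):04d}"

print(next_segment(["Segment_0000", "Segment_0001"]))  # Segment_0002
```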

# Placeholders

The input file, submit script, ‘run_dir’, ‘segments_dir’, and ‘job_name’ can have placeholders like ‘{{ num_nodes }}’. They must conform to the [Jinja template format](https://jinja.palletsprojects.com/en/3.0.x/templates/). The placeholders are resolved in the following stages, with the following parameters available at each stage:

  1. ‘job_name’ (if specified):

  - All arguments to this function, including all additional ‘–params’. For example, the additional ‘–params’ can include parameters controlling resolution in the input file.
  - ‘executable_name’: Just the name of the executable (basename of the ‘executable’ path).

  2. ‘run_dir’ and ‘segments_dir’:

  - All parameters from the previous stage.
  - ‘job_name’: Either the resolved ‘job_name’ from the previous stage, or the ‘executable_name’ if unspecified.

  3. Input file & submit script:

  - All parameters from the previous stages.
  - ‘run_dir’: Absolute path to the ‘run_dir’.
  - ‘segments_dir’: Absolute path to the ‘segments_dir’, or ‘None’ if no segments directory is available.
  - ‘input_file’: Relative path to the configured input file (in the ‘run_dir’).
  - ‘out_file’: Absolute path to the log file (in the ‘run_dir’).
  - ‘spectre_cli’: Absolute path to the SpECTRE CLI.
  - Typical additional parameters used in submit scripts are ‘queue’ and ‘time_limit’.

The parameters used to render the submit script are stored in a context file (named ‘context_file_name’) in the ‘run_dir’ to support resubmissions. The context file is used by ‘spectre.support.resubmit’ to schedule the next segment using the same parameters.

# Scheduling multiple runs

You can pass ranges of parameters to the ‘–params’ of this function to schedule multiple runs using the same input file template. For example, you can do an h-convergence test by using a placeholder for the refinement level in your input file:

```yaml
# In the domain creator:
InitialRefinement: {{ lev }}
```

When a parameter in ‘–params’ is an iterable, the ‘schedule’ function will recurse for every element in the iterable. For example, you can schedule multiple runs for a convergence test like this:

```py
schedule(
    run_dir="Lev{{ lev }}",
    # …
    lev=range(1, 3))
```
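The fan-out over iterable parameters can be sketched as follows; this is an illustrative model of the recursion, not the real ‘schedule’ implementation:

```python
# Expand a parameter dictionary into one dictionary per run: for the
# first iterable value found, recurse once per element. (Sketch of
# the assumed fan-out behavior, not the real implementation.)
def expand_params(params):
    for name, value in params.items():
        if isinstance(value, (range, list, tuple)):
            return [run for v in value
                    for run in expand_params({**params, name: v})]
    return [params]

runs = expand_params({"run_dir": "Lev{{ lev }}", "lev": range(1, 3)})
# Two runs are scheduled: one with lev=1, one with lev=2
```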
Arguments:

input_file_template: Path to an input file. It will be copied to the ‘run_dir’. It can be a Jinja template (see above).

scheduler: ‘None’ to run the executable directly, or a scheduler such as ‘sbatch’ to submit the run to a queue.

no_schedule: Optional. If ‘True’, override the ‘scheduler’ to ‘None’. Useful to specify on the command line where the ‘scheduler’ defaults to ‘sbatch’ on clusters.

executable: Path or name of the executable to run. If unspecified, use the ‘Executable’ set in the input file metadata.

run_dir: The directory to which input file, submit script, etc. are copied, and relative to which the executable will run. Can be a Jinja template (see above).

segments_dir: The directory in which a new segment is created as the ‘run_dir’. Mutually exclusive with ‘run_dir’. Can be a Jinja template (see above).

copy_executable: Copy the executable to the run or segments directory.
By default (when set to ‘None’):
  • If ‘--run-dir’ / ‘-o’ is set, don’t copy.
  • If ‘--segments-dir’ / ‘-O’ is set, copy to the segments directory to support resubmission.
When set to ‘True’:
  • If ‘--run-dir’ / ‘-o’ is set, copy to the run directory.
  • If ‘--segments-dir’ / ‘-O’ is set, copy to the segments directory to support resubmission. Still don’t copy to individual segments.
When set to ‘False’: Never copy.

job_name: Optional. A string describing the job. Can be a Jinja template (see above). (Default: executable name)

submit_script_template: Optional. Path to a submit script. It will be copied to the ‘run_dir’ if a ‘scheduler’ is set. Can be a Jinja template (see above). (Default: value of ‘default_submit_script_template’)

from_checkpoint: Optional. Path to a checkpoint directory.

input_file_name: Optional. Filename of the input file in the ‘run_dir’. (Default: basename of the ‘input_file_template’)

submit_script_name: Optional. Filename of the submit script. (Default: “Submit.sh”)

out_file_name: Optional. Name of the log file. (Default: “spectre.out”)

context_file_name: Optional. Name of the file that stores the context for resubmissions in the ‘run_dir’. Used by ‘spectre.support.resubmit’. (Default: “SchedulerContext.yaml”)

submit: Optional. If ‘True’, automatically submit jobs using the ‘scheduler’. If ‘False’, skip the job submission. If ‘None’, prompt for confirmation before submitting.

clean_output: Optional. When ‘True’, use ‘spectre.tools.CleanOutput.clean_output’ to clean up existing output files in the ‘run_dir’ before scheduling the run. (Default: ‘False’)

force: Optional. When ‘True’, overwrite input file and submit script in the ‘run_dir’ instead of raising an error when they already exist.

extra_params: Optional. Dictionary of extra parameters passed to input file and submit script templates. Parameters can also be passed as keyword arguments to this function instead.

Returns: The ‘subprocess.CompletedProcess’ representing either the process that scheduled the run, or the process that ran the executable if ‘scheduler’ is ‘None’. Returns ‘None’ if no or multiple runs were scheduled.
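The default behavior of ‘copy_executable’ described above can be summarized in a small sketch. The helper name ‘resolve_copy_executable’ is hypothetical; it only illustrates the documented defaults, not the actual implementation:

```python
# Hypothetical helper illustrating the documented 'copy_executable' defaults.
# Returns True if the executable should be copied alongside the run.
def resolve_copy_executable(copy_executable, run_dir=None, segments_dir=None):
    if copy_executable is None:
        # Default: copy only when scheduling into a segments directory,
        # so that resubmissions keep using the same binary.
        return segments_dir is not None
    # Explicit True/False always wins.
    return copy_executable
```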

spectre schedule [OPTIONS] INPUT_FILE_TEMPLATE

Options

-E, --executable <executable>#

The executable to run. Can be a path, or just the name of the executable if it’s in the ‘PATH’. If unspecified, the ‘Executable’ listed in the input file metadata is used.

Default:

executable listed in input file

-o, --run-dir <run_dir>#

The directory to which input file, submit script, etc. are copied, relative to which the executable will run, and to which output files are written. Defaults to the current working directory if the input file is already there. Mutually exclusive with ‘--segments-dir’ / ‘-O’.

-O, --segments-dir <segments_dir>#

The directory in which to create the next segment. Requires ‘--from-checkpoint’ or ‘--from-last-checkpoint’ unless starting the first segment.

--copy-executable, --no-copy-executable#

Copy the executable to the run or segments directory.
  • When no flag is specified: if ‘--run-dir’ / ‘-o’ is set, don’t copy; if ‘--segments-dir’ / ‘-O’ is set, copy to the segments directory to support resubmission.
  • When ‘--copy-executable’ is specified: if ‘--run-dir’ / ‘-o’ is set, copy to the run directory; if ‘--segments-dir’ / ‘-O’ is set, copy to the segments directory to support resubmission. Still don’t copy to individual segments.
  • When ‘--no-copy-executable’ is specified: never copy.

-C, --clean-output#

Clean up existing output files in the run directory before running the executable. See the ‘spectre clean-output’ command for details.

-f, --force#

Overwrite existing files in the ‘--run-dir’ / ‘-o’. You may also want to use ‘--clean-output’.

--scheduler <scheduler>#

The scheduler invoked to queue jobs on the machine.

Default:

none

--no-schedule#

Run the executable directly, without scheduling it.

--submit-script-template <submit_script_template>#

Path to a submit script. It will be copied to the ‘run_dir’. It can be a [Jinja template](https://jinja.palletsprojects.com/en/3.0.x/templates/) (see main help text for possible placeholders).

Default:

/__w/spectre/spectre/build/bin/python/spectre/support/SubmitTemplate.sh

-J, --job-name <job_name>#

A short name for the job (see main help text for possible placeholders).

Default:

executable name

-j, -c, --num-procs <num_procs>#

Number of worker threads. Mutually exclusive with ‘–num-nodes’ / ‘-N’.

-N, --num-nodes <num_nodes>#

Number of nodes

--queue <queue>#

Name of the queue.

-t, --time-limit <time_limit>#

Wall time limit. Must be compatible with the chosen queue.

-p, --param <extra_params>#

Forward an additional parameter to the input file and submit script templates. Can be specified multiple times. Each entry must be a ‘key=value’ pair, where the key is the parameter name. The value can be an int, float, string, a comma-separated list, an inclusive range like ‘0...3’, an exclusive range like ‘0..3’ or ‘0..<3’, or an exponentiated value or range like ‘2**3’ or ‘10**4...6’. If a parameter is a list or range, multiple runs are scheduled recursively. You can also use the parameter in the ‘job_name’ and in the ‘run_dir’ or ‘segment_dir’, and when scheduling ranges of runs you probably should.
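As a rough illustration of how such range specifications expand into parameter values (a sketch under the semantics stated above, not the actual parser; ‘expand_range’ is a hypothetical name):

```python
import re

# Hypothetical sketch of expanding '--param' range specs into values.
# Per the text above: '0...3' is inclusive, '0..3' and '0..<3' are exclusive.
def expand_range(spec):
    match = re.fullmatch(r"(\d+)\.\.\.(\d+)", spec)
    if match:  # inclusive range, e.g. '0...3' -> [0, 1, 2, 3]
        lo, hi = map(int, match.groups())
        return list(range(lo, hi + 1))
    match = re.fullmatch(r"(\d+)\.\.<?(\d+)", spec)
    if match:  # exclusive range, e.g. '0..3' or '0..<3' -> [0, 1, 2]
        lo, hi = map(int, match.groups())
        return list(range(lo, hi))
    raise ValueError(f"Not a range: {spec}")
```

A run would then be scheduled recursively for each value in the expanded list.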

--submit, --no-submit#

Submit jobs automatically. If neither option is specified, a prompt will ask for confirmation before a job is submitted.

--context-file-name <context_file_name>#

Name of the context file that supports resubmissions.

Default:

SchedulerContext.yaml

--from-checkpoint <from_checkpoint>#

Restart from this checkpoint.

--from-last-checkpoint <from_last_checkpoint>#

Restart from the last checkpoint in this directory.

Arguments

INPUT_FILE_TEMPLATE#

Required argument

simplify-traces#

Process Charm++ Projections trace files

Process Charm++ Projections ‘.sts’ (not ‘.sum.sts’) files to make the entry method and Chare names easier to read in the GUI. Long template names are not rendered fully, making it impossible to figure out which Action and Chare were being measured. Standard entry methods like ‘invoke_iterable_action’ and ‘simple_action’ are simplified by default, but further textual and regular-expression replacements can be specified in a JSON file.

The output of this command is written to the ‘OUTPUT_FILE’ if specified, to stdout if unspecified, or edited in place if the ‘-i’ flag is specified. Note that you will need to replace Charm++’s .sts file with the output file, and the names must match.

spectre simplify-traces [OPTIONS] PROJECTIONS_FILE [OUTPUT_FILE]

Options

-i, --in-place#

Edit the ‘PROJECTIONS_FILE’ in place. A backup of the file is written to the ‘OUTPUT_FILE’ if specified.

-r, --replacements-json-file <replacements_json_file>#

A JSON file listing textual and regular-expression replacements. The file must specify “BasicReplace” and “RegexReplace” dictionaries. Each dictionary’s keys are the names of the replacements (unused in any searches). For BasicReplace, the value is a list of two-element lists: the first entry in each nested list is the string to replace, and the second is what to replace it with. An example entry is:

"Actions::MutateApply": [["Actions::MutateApply<", ""], [">()", "()"]]

where each occurrence of “Actions::MutateApply<” is removed and “>()” is replaced with “()”. The regular-expression replacements are structured similarly, but the entire regex match is replaced.
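A minimal sketch of how such a “BasicReplace” entry acts on a trace line (illustrative only, not the actual implementation; the Chare name ‘MyMutator’ is a made-up example):

```python
# Apply the 'BasicReplace' pairs from the example entry above.
# Each pair is [text to find, text to replace it with].
replacements = [["Actions::MutateApply<", ""], [">()", "()"]]

def simplify_line(line, pairs):
    # Apply each plain-text replacement in order.
    for old, new in pairs:
        line = line.replace(old, new)
    return line

print(simplify_line("Actions::MutateApply<MyMutator>()", replacements))
# -> MyMutator()
```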

Arguments

PROJECTIONS_FILE#

Required argument

OUTPUT_FILE#

Optional argument

status#

Gives an overview of simulations running on this machine.

spectre status [OPTIONS]

Options

-u, --uid, --user <user>#

User name or user ID. See documentation for ‘sacct -u’ for details.

Default:

you

-a, --allusers#

Show jobs for all users. See documentation for ‘sacct -a’ for details.

-p, --show-paths#

Show job working directory and input file paths.

-U, --show-unidentified#

Also show jobs that were not identified as SpECTRE executables.

-D, --show-deleted#

Also show jobs that ran in directories that are now deleted.

-A, --show-all-segments#

Show all segments instead of just the latest.

-s, --state <state>#

Show only jobs with this state, e.g. running (r) or completed (cd). See documentation for ‘sacct -s’ for details.

-S, --starttime <starttime>#

Show jobs eligible after this time, e.g. ‘now-2days’. See documentation for ‘sacct -S’ for details.

Default:

start of today

-w, --watch <refresh_rate>#

On a new screen, refresh the list of jobs every <refresh_rate> seconds. Exit with Ctrl+C.

--state-styles <state_styles>#

Dictionary mapping sacct states to rich styles that control how each state is printed. Rather than always specifying a dict on the command line, you can add this to the spectre config file.

An example for the config file would be

status:
  state_styles:
    RUNNING: '[green]'
    COMPLETED: '[bold][red]'

See spectre -h for its path.

-c, --columns <columns>#

List of columns printed for all jobs (executable-specific columns are added in addition to this list). This can also be specified in the config file under the name ‘columns’. Note that if the ‘--allusers’ option is not specified, the “User” column is omitted. Specify the columns as a comma-separated list: State,JobId,Nodes. To include spaces in the list, wrap it in single quotes: ‘State, JobId, Nodes’.

Default:

State,End,User,JobID,JobName,Elapsed,Cores,Nodes

-e, --available-columns#

Print a list of all available columns to use.

transform-volume-data#

Transform volume data with Python functions

Run Python functions (kernels) over all volume data in the ‘H5FILES’ and write the output data back into the same files. You can use any Python function as kernel that takes tensors as input arguments and returns a tensor (from ‘spectre.DataStructures.Tensor’). The function must be annotated with tensor types, like this:

def shift_magnitude(
    shift: tnsr.I[DataVector, 3],
    spatial_metric: tnsr.ii[DataVector, 3],
) -> Scalar[DataVector]:
    # ...

Any pybind11 binding of a C++ function will also work, as long as it takes only supported types as arguments. Supported types are tensors, as well as structural information such as the mesh, coordinates, and Jacobians. See the ‘parse_kernel_arg’ function for all supported argument types, and ‘parse_kernel_output’ for all supported return types.

The kernels can be loaded from any available Python module. Examples of useful kernels:

  • Anything in ‘spectre.PointwiseFunctions’

  • ‘spectre.NumericalAlgorithms.LinearOperators.relative_truncation_error’ and ‘absolute_truncation_error’

You can also execute a Python file that defines kernels with the ‘--exec’ / ‘-e’ option.

By default, the data for the input arguments are read from datasets in the volume files with the same names, transformed to CamelCase. For example, the input dataset names for the ‘shift_magnitude’ function above would be ‘Shift(_x,_y,_z)’ and ‘SpatialMetric(_xx,_yy,_zz,_xy,_xz,_yz)’. That is, the code uses the name ‘shift’ from the function argument, changes it to CamelCase, then reads the ‘Shift(_x,_y,_z)’ datasets into a ‘tnsr.I[DataVector, 3]’ before passing it to the function. You can override the input names with the ‘--input-name’ / ‘-i’ option. The output would be written to a dataset named ‘ShiftMagnitude’, which is the function name transformed to CamelCase.
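The name transformation described above can be sketched in a couple of lines (a hypothetical helper illustrating the documented convention, not the actual code):

```python
# Sketch of the snake_case -> CamelCase dataset-name convention described above.
def to_camel_case(name):
    # Split on underscores and capitalize each word.
    return "".join(part.capitalize() for part in name.split("_"))

print(to_camel_case("shift_magnitude"))  # -> ShiftMagnitude
print(to_camel_case("spatial_metric"))   # -> SpatialMetric
```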

spectre transform-volume-data [OPTIONS] [H5FILES]...

Options

-d, --subfile-name <subfile_name>#

Name of volume data subfile within each h5 file.

-k, --kernel <kernels>#

Python function to apply to the volume data. Specify as ‘path.to.module.function_name’, where the module must be available to import. Alternatively, specify just ‘function_name’ if the function is defined in one of the ‘--exec’ / ‘-e’ files. Can be specified multiple times.

-e, --exec <exec_files>#

Python file to execute before loading kernels. Load kernels from this file with the ‘--kernel’ / ‘-k’ option. Can be specified multiple times.

-i, --input-name <map_input_names>#

Map of function argument name to dataset name in the volume data file. Specify key-value pairs like ‘spatial_metric=SpatialMetric’. Can be specified multiple times. If unspecified, the argument name is transformed to CamelCase.
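As an illustration, repeated ‘key=value’ options like these fold naturally into a mapping (a sketch, not the actual CLI code; ‘parse_name_map’ is a hypothetical name):

```python
# Sketch: fold repeated 'key=value' options into a dict,
# as for '--input-name' / '-i'.
def parse_name_map(pairs):
    mapping = {}
    for pair in pairs:
        key, _, value = pair.partition("=")
        mapping[key] = value
    return mapping

print(parse_name_map(["spatial_metric=SpatialMetric", "shift=Shift"]))
# -> {'spatial_metric': 'SpatialMetric', 'shift': 'Shift'}
```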

--integrate#

Compute the volume integral over the kernels instead of writing them back into the data files. Specify ‘--output’ / ‘-o’ to write the integrals to a file.

-o, --output <output>#

Output file for integrals. Either a ‘.txt’ file or a ‘.h5’ file. Also requires the ‘--output-subfile’ option if a ‘.h5’ file is used. Only used if the ‘--integrate’ flag is set.

--output-subfile <output_subfile>#

Subfile name in the ‘--output’ / ‘-o’ file, if it is an H5 file.

-f, --force#

Overwrite existing data.

Arguments

H5FILES#

Optional argument(s)

validate#

Check an input file for parse errors

spectre validate [OPTIONS] INPUT_FILE_PATH

Options

-E, --executable <executable>#

Name or path of the executable. If unspecified, the ‘Executable:’ in the input file metadata is used.

Arguments

INPUT_FILE_PATH#

Required argument