MatdynBaseWorkChain

workchain: aiida_quantumespresso.workflows.matdyn.base.MatdynBaseWorkChain

Workchain to run a Quantum ESPRESSO matdyn.x calculation with automated error handling and restarts.

Inputs:

  • clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
  • handler_overrides, (Dict, NoneType), optional – Mapping where the keys are process handler names and the values are dictionaries that can define the enabled and priority keys, used to override the values set in the original process handler declaration.
  • matdyn, Namespace
    • code, (AbstractCode, NoneType), optional – The Code to use for this job. This input is required, unless the remote_folder input is specified, which means an existing job is being imported and no code will actually be run.
    • force_constants, ForceConstantsData, required
    • kpoints, KpointsData, required – Kpoints on which to calculate the phonon frequencies.
    • metadata, Namespace
      • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
      • computer, (Computer, NoneType), optional, is_metadata – When using a “local” code, set the computer on which the calculation should be run.
      • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
      • dry_run, bool, optional, is_metadata – When set to True will prepare the calculation job for submission but not actually launch it.
      • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
      • options, Namespace
        • account, (str, NoneType), optional, is_metadata – Set the account to use for the queue on the remote computer.
        • additional_retrieve_list, (list, tuple, NoneType), optional, is_metadata – List of relative file paths that should be retrieved in addition to what the plugin specifies.
        • append_text, str, optional, is_metadata – Set the calculation-specific append text, which will be appended in the scheduler-job script, just after the code execution.
        • custom_scheduler_commands, str, optional, is_metadata – Set a (possibly multiline) string with commands the user wants to set manually for the scheduler. Unlike prepend_text, this string is inserted in the scheduler submission file before any non-scheduler command.
        • environment_variables, dict, optional, is_metadata – Set a dictionary of custom environment variables for this calculation.
        • environment_variables_double_quotes, bool, optional, is_metadata – If set to True, use double quotes instead of single quotes to escape the environment variables specified in environment_variables.
        • import_sys_environment, bool, optional, is_metadata – If set to True, the submission script will load the system environment variables.
        • input_filename, str, optional, is_metadata
        • max_memory_kb, (int, NoneType), optional, is_metadata – Set the maximum memory (in kilobytes) to request from the scheduler.
        • max_wallclock_seconds, (int, NoneType), optional, is_metadata – Set the wallclock time (in seconds) to request from the scheduler.
        • mpirun_extra_params, (list, tuple), optional, is_metadata – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
        • output_filename, str, optional, is_metadata
        • parser_name, str, optional, is_metadata
        • prepend_text, str, optional, is_metadata – Set the calculation-specific prepend text, which will be prepended in the scheduler-job script, just before the code execution.
        • priority, (str, NoneType), optional, is_metadata – Set the priority of the job to be queued.
        • qos, (str, NoneType), optional, is_metadata – Set the quality of service to use for the queue on the remote computer.
        • queue_name, (str, NoneType), optional, is_metadata – Set the name of the queue on the remote computer.
        • rerunnable, (bool, NoneType), optional, is_metadata – Determines if the calculation can be requeued / rerun.
        • resources, dict, required, is_metadata – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
        • scheduler_stderr, str, optional, is_metadata – Filename to which the content of stderr of the scheduler is written.
        • scheduler_stdout, str, optional, is_metadata – Filename to which the content of stdout of the scheduler is written.
        • stash, Namespace – Optional directives to stash files after the calculation job has completed.
          • source_list, (tuple, list, NoneType), optional, is_metadata – Sequence of relative filepaths representing files in the remote directory that should be stashed.
          • stash_mode, (str, NoneType), optional, is_metadata – Mode with which to perform the stashing, should be value of aiida.common.datastructures.StashMode.
          • target_base, (str, NoneType), optional, is_metadata – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
        • submit_script_filename, str, optional, is_metadata – Filename to which the job submission script is written.
        • withmpi, bool, optional, is_metadata
      • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
    • monitors, Namespace – Add monitoring functions that can inspect output files while the job is running and decide to prematurely terminate the job.
    • parameters, (Dict, NoneType), optional – Parameters for the namelists in the input file.
    • parent_folder, (RemoteData, FolderData, SinglefileData, NoneType), optional – Use a local or remote folder as parent folder (for restarts and similar).
    • remote_folder, (RemoteData, NoneType), optional – Remote directory containing the results of a calculation job that was already completed without AiiDA. The inputs should be passed to the CalcJob as normal, but instead of launching the actual job, the engine will recreate the input files and then proceed straight to the retrieve step, where the files of this RemoteData will be retrieved as if the job had actually been launched through AiiDA. If a parser is defined in the inputs, the results are parsed and attached as output nodes as usual.
    • settings, (Dict, NoneType), optional – Use an additional node for special settings.
  • max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
  • metadata, Namespace
    • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
    • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
    • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
    • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
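
For reference, a minimal sketch of how these inputs are typically assembled and submitted, assuming a configured AiiDA profile; the code label and the force-constants PK are hypothetical placeholders:

    from aiida import load_profile, orm
    from aiida.engine import submit
    from aiida.plugins import WorkflowFactory

    load_profile()

    MatdynBaseWorkChain = WorkflowFactory('quantumespresso.matdyn.base')

    # q-point mesh on which to interpolate the phonon frequencies
    kpoints = orm.KpointsData()
    kpoints.set_kpoints_mesh([8, 8, 8])

    builder = MatdynBaseWorkChain.get_builder()
    builder.matdyn.code = orm.load_code('matdyn@localhost')  # hypothetical code label
    builder.matdyn.force_constants = orm.load_node(1234)     # hypothetical PK of a ForceConstantsData node (e.g. from q2r.x)
    builder.matdyn.kpoints = kpoints
    builder.matdyn.metadata.options.resources = {'num_machines': 1, 'num_mpiprocs_per_machine': 1}
    builder.matdyn.metadata.options.max_wallclock_seconds = 1800
    builder.clean_workdir = orm.Bool(True)

    node = submit(builder)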

Outputs:

  • output_parameters, Dict, required
  • output_phonon_bands, BandsData, required
  • remote_folder, RemoteData, required – Input files necessary to run the process will be stored in this folder node.
  • remote_stash, RemoteStashData, optional – Contents of the stash.source_list option are stored in this remote folder after job completion.
  • retrieved, FolderData, required – Files that are retrieved by the daemon will be stored in this node. By default the stdout and stderr of the scheduler will be added, but one can add more by specifying them in CalcInfo.retrieve_list.
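
Once the work chain has finished, these outputs can be read from the process node. A short sketch, assuming a loaded profile; the PK is hypothetical:

    from aiida import load_profile, orm

    load_profile()

    node = orm.load_node(4321)  # hypothetical PK of a finished MatdynBaseWorkChain

    if node.is_finished_ok:
        print(node.outputs.output_parameters.get_dict())
        bands = node.outputs.output_phonon_bands  # BandsData
        frequencies = bands.get_bands()           # numpy array of shape (num_kpoints, num_bands)
        print(frequencies.shape)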

Outline:

setup(Call the `setup` of the `BaseRestartWorkChain` and then create the inputs dictionary in `self.ctx.inputs`. This `self.ctx.inputs` dictionary will be used by the `BaseRestartWorkChain` to submit the calculations in the internal loop.)
while(should_run_process)
    run_process(Run the next process, taking the input dictionary from the context at `self.ctx.inputs`.)
    inspect_process(Analyse the results of the previous process and call the handlers when necessary. If the process is excepted or killed, the work chain will abort. Otherwise any attached handlers will be called in order of their specified priority. If the process failed and no handler returns a report indicating that the error was handled, it is considered an unhandled process failure and the process is relaunched. If this happens twice in a row, the work chain is aborted. In the case that at least one handler returned a report, the following matrix determines the logic that is followed:

        Process result   Handler report?   Handler exit code   Action
        --------------------------------------------------------------
        Success          yes               == 0                Restart
        Success          yes               != 0                Abort
        Failed           yes               == 0                Restart
        Failed           yes               != 0                Abort

    If no handler returned a report and the process finished successfully, the work chain's work is considered done and it will move on to the next step that directly follows the `while` conditional, if there is one defined in the outline.)
results(Attach the outputs specified in the output specification from the last completed process.)
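
The handler logic described in `inspect_process` can be exercised by attaching custom process handlers in a subclass. A minimal sketch, assuming that doubling the requested walltime is a sensible recovery; the handler name and strategy are hypothetical:

    from aiida.engine import ProcessHandlerReport, process_handler
    from aiida_quantumespresso.workflows.matdyn.base import MatdynBaseWorkChain


    class CustomMatdynBaseWorkChain(MatdynBaseWorkChain):
        """Hypothetical subclass illustrating the restart/abort matrix above."""

        @process_handler(priority=500)
        def handle_failure_with_more_walltime(self, node):
            """Double the requested walltime and report the failure as handled.

            A report with a zero exit code marks the problem as handled, so the
            work chain restarts (the 'Restart' rows of the matrix); returning a
            report with a non-zero exit code would abort the work chain instead.
            """
            if node.is_finished_ok:
                return None  # no report: nothing to handle for this iteration

            options = self.ctx.inputs.metadata['options']
            options['max_wallclock_seconds'] = 2 * options.get('max_wallclock_seconds', 1800)
            return ProcessHandlerReport(do_break=True)  # do_break skips lower-priority handlers

Such a handler can then be toggled at submission time through the `handler_overrides` input, e.g. `orm.Dict({'handle_failure_with_more_walltime': {'enabled': False}})`.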