PwBandsWorkChain

workchain: aiida_quantumespresso.workflows.pw.bands.PwBandsWorkChain

Work chain to compute a band structure for a given structure using Quantum ESPRESSO pw.x. The logic for computing the various parameters of the BANDS step is as follows:

Number of bands: the number of bands for the BANDS step can be specified either directly through the input parameters `bands.pw.parameters.SYSTEM.nbnd` or through `nbands_factor`; specifying both is not allowed. When neither is specified, the work chain sets nothing and the Quantum ESPRESSO default is used. If `nbands_factor` is specified, the maximum of the following values is used:

  • `nbnd` of the preceding SCF calculation
  • 0.5 * nelectrons * nbands_factor
  • 0.5 * nelectrons + 4

Kpoints: there are three options: pass an existing `KpointsData` through `bands_kpoints`, specify `bands_kpoints_distance`, or specify neither. In the first case, those exact kpoints are used for the BANDS step. In the other two cases, the structure is first normalized using SeekPath and a path along high-symmetry k-points is generated for the normalized structure. The distance between kpoints along the path is set by `bands_kpoints_distance`, or the SeekPath default if it is not specified.
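The number-of-bands rule above can be sketched as a small Python function. This is an illustrative sketch only, not the work chain's actual implementation; `nbnd_scf` and `nelectrons` are placeholder names for the SCF band count and electron count, and the rounding shown here is an assumption:

```python
def compute_bands_nbnd(nbnd_scf: int, nelectrons: int, nbands_factor: float) -> int:
    """Number of bands for the BANDS step when `nbands_factor` is given.

    Takes the maximum of the three candidate values described above.
    """
    return max(
        nbnd_scf,                               # nbnd of the preceding SCF calculation
        int(0.5 * nelectrons * nbands_factor),  # 0.5 * nelectrons * nbands_factor
        int(0.5 * nelectrons) + 4,              # 0.5 * nelectrons + 4
    )


# For a 16-electron system with nbands_factor = 2.0 and 10 SCF bands:
# max(10, 16, 12) -> 16 bands for the BANDS step.
```

Remember that `nbands_factor` and `bands.pw.parameters.SYSTEM.nbnd` are mutually exclusive inputs; the function above only applies when `nbands_factor` is provided.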

Inputs:

  • bands, Namespace – Inputs for the PwBaseWorkChain for the BANDS calculation.
    • handler_overrides, (Dict, NoneType), optional – Mapping where keys are process handler names and the values are a dictionary, where each dictionary can define the enabled and priority key, which can be used to toggle the values set on the original process handler declaration.
    • kpoints, (KpointsData, NoneType), optional – An explicit k-points list or mesh. Either this or kpoints_distance has to be provided.
    • kpoints_distance, (Float, NoneType), optional – The minimum desired distance in 1/Å between k-points in reciprocal space. The explicit k-points will be generated automatically by a calculation function based on the input structure.
    • kpoints_force_parity, (Bool, NoneType), optional – Optional input when constructing the k-points based on a desired kpoints_distance. Setting this to True will force the k-point mesh to have an even number of points along each lattice vector except for any non-periodic directions.
    • max_iterations, Int, optional – Maximum number of times the work chain will restart the process to try to finish it successfully.
    • metadata, Namespace
      • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
      • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
      • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
      • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
    • pw, Namespace
      • code, (AbstractCode, NoneType), optional – The Code to use for this job. This input is required, unless the remote_folder input is specified, which means an existing job is being imported and no code will actually be run.
      • hubbard_file, (SinglefileData, NoneType), optional – SinglefileData node containing the output Hubbard parameters from a HpCalculation
      • metadata, Namespace
        • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
        • computer, (Computer, NoneType), optional, is_metadata – When using a “local” code, set the computer on which the calculation should be run.
        • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
        • dry_run, bool, optional, is_metadata – When set to True will prepare the calculation job for submission but not actually launch it.
        • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
        • options, Namespace
          • account, (str, NoneType), optional, is_metadata – Set the account to use for the queue on the remote computer.
          • additional_retrieve_list, (list, tuple, NoneType), optional, is_metadata – List of relative file paths that should be retrieved in addition to what the plugin specifies.
          • append_text, str, optional, is_metadata – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
          • custom_scheduler_commands, str, optional, is_metadata – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
          • environment_variables, dict, optional, is_metadata – Set a dictionary of custom environment variables for this calculation
          • environment_variables_double_quotes, bool, optional, is_metadata – If set to True, use double quotes instead of single quotes to escape the environment variables specified in environment_variables.
          • import_sys_environment, bool, optional, is_metadata – If set to True, the submission script will load the system environment variables.
          • input_filename, str, optional, is_metadata
          • max_memory_kb, (int, NoneType), optional, is_metadata – Set the maximum memory (in kilobytes) to request from the scheduler.
          • max_wallclock_seconds, (int, NoneType), optional, is_metadata – Set the wallclock time in seconds to request from the scheduler.
          • mpirun_extra_params, (list, tuple), optional, is_metadata – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
          • output_filename, str, optional, is_metadata
          • parser_name, str, optional, is_metadata
          • prepend_text, str, optional, is_metadata – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
          • priority, (str, NoneType), optional, is_metadata – Set the priority of the job to be queued
          • qos, (str, NoneType), optional, is_metadata – Set the quality of service to use for the queue on the remote computer.
          • queue_name, (str, NoneType), optional, is_metadata – Set the name of the queue on the remote computer
          • rerunnable, (bool, NoneType), optional, is_metadata – Determines if the calculation can be requeued / rerun.
          • resources, dict, required, is_metadata – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
          • scheduler_stderr, str, optional, is_metadata – Filename to which the content of stderr of the scheduler is written.
          • scheduler_stdout, str, optional, is_metadata – Filename to which the content of stdout of the scheduler is written.
          • stash, Namespace – Optional directives to stash files after the calculation job has completed.
            • source_list, (tuple, list, NoneType), optional, is_metadata – Sequence of relative filepaths representing files in the remote directory that should be stashed.
            • stash_mode, (str, NoneType), optional, is_metadata – Mode with which to perform the stashing, should be value of aiida.common.datastructures.StashMode.
            • target_base, (str, NoneType), optional, is_metadata – The base location to which the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
          • submit_script_filename, str, optional, is_metadata – Filename to which the job submission script is written.
          • withmpi, bool, optional, is_metadata
          • without_xml, (bool, NoneType), optional, is_metadata – If set to True the parser will not fail if the XML file is missing in the retrieved folder.
        • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
      • monitors, Namespace – Add monitoring functions that can inspect output files while the job is running and decide to prematurely terminate the job.
      • parallelization, (Dict, NoneType), optional – Parallelization options. The following flags are allowed: npool : The number of ‘pools’, each taking care of a group of k-points. nband : The number of ‘band groups’, each taking care of a group of Kohn-Sham orbitals. ntg : The number of ‘task groups’ across which the FFT planes are distributed. ndiag : The number of ‘linear algebra groups’ used when parallelizing the subspace diagonalization / iterative orthonormalization. By default, no parameter is passed to Quantum ESPRESSO, meaning it will use its default.
      • parameters, Dict, required – The input parameters that are to be used to construct the input file.
      • pseudos, Namespace – A mapping of UpfData nodes onto the kind name to which they should apply.
      • remote_folder, (RemoteData, NoneType), optional – Remote directory containing the results of a calculation job that was already completed outside of AiiDA. The inputs should be passed to the CalcJob as normal but, instead of launching the actual job, the engine will recreate the input files and then proceed straight to the retrieve step, where the files of this RemoteData will be retrieved as if the job had actually been launched through AiiDA. If a parser is defined in the inputs, the results are parsed and attached as output nodes as usual.
      • settings, (Dict, NoneType), optional – Optional parameters to affect the way the calculation job and the parsing are performed.
      • vdw_table, (SinglefileData, NoneType), optional – Optional van der Waals table contained in a SinglefileData.
  • bands_kpoints, (KpointsData, NoneType), optional – Explicit kpoints to use for the BANDS calculation. Specify either this or bands_kpoints_distance.
  • bands_kpoints_distance, (Float, NoneType), optional – Minimum kpoints distance for the BANDS calculation. Specify either this or bands_kpoints.
  • clean_workdir, Bool, optional – If True, the work directories of all called calculations will be cleaned at the end of execution.
  • metadata, Namespace
    • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
    • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
    • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
    • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
  • nbands_factor, (Float, NoneType), optional – The number of bands for the BANDS calculation is that used for the SCF multiplied by this factor.
  • relax, Namespace – Inputs for the PwRelaxWorkChain; if not specified at all, the relaxation step is skipped.
    • base, Namespace – Inputs for the PwBaseWorkChain for the main relax loop.
      • handler_overrides, (Dict, NoneType), optional – Mapping where keys are process handler names and the values are a dictionary, where each dictionary can define the enabled and priority key, which can be used to toggle the values set on the original process handler declaration.
      • kpoints, (KpointsData, NoneType), optional – An explicit k-points list or mesh. Either this or kpoints_distance has to be provided.
      • kpoints_distance, (Float, NoneType), optional – The minimum desired distance in 1/Å between k-points in reciprocal space. The explicit k-points will be generated automatically by a calculation function based on the input structure.
      • kpoints_force_parity, (Bool, NoneType), optional – Optional input when constructing the k-points based on a desired kpoints_distance. Setting this to True will force the k-point mesh to have an even number of points along each lattice vector except for any non-periodic directions.
      • max_iterations, Int, optional – Maximum number of times the work chain will restart the process to try to finish it successfully.
      • metadata, Namespace
        • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
        • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
        • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
        • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
      • pw, Namespace
        • code, (AbstractCode, NoneType), optional – The Code to use for this job. This input is required, unless the remote_folder input is specified, which means an existing job is being imported and no code will actually be run.
        • hubbard_file, (SinglefileData, NoneType), optional – SinglefileData node containing the output Hubbard parameters from a HpCalculation
        • metadata, Namespace
          • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
          • computer, (Computer, NoneType), optional, is_metadata – When using a “local” code, set the computer on which the calculation should be run.
          • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
          • dry_run, bool, optional, is_metadata – When set to True will prepare the calculation job for submission but not actually launch it.
          • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
          • options, Namespace
            • account, (str, NoneType), optional, is_metadata – Set the account to use for the queue on the remote computer.
            • additional_retrieve_list, (list, tuple, NoneType), optional, is_metadata – List of relative file paths that should be retrieved in addition to what the plugin specifies.
            • append_text, str, optional, is_metadata – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
            • custom_scheduler_commands, str, optional, is_metadata – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
            • environment_variables, dict, optional, is_metadata – Set a dictionary of custom environment variables for this calculation
            • environment_variables_double_quotes, bool, optional, is_metadata – If set to True, use double quotes instead of single quotes to escape the environment variables specified in environment_variables.
            • import_sys_environment, bool, optional, is_metadata – If set to True, the submission script will load the system environment variables.
            • input_filename, str, optional, is_metadata
            • max_memory_kb, (int, NoneType), optional, is_metadata – Set the maximum memory (in kilobytes) to request from the scheduler.
            • max_wallclock_seconds, (int, NoneType), optional, is_metadata – Set the wallclock time in seconds to request from the scheduler.
            • mpirun_extra_params, (list, tuple), optional, is_metadata – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
            • output_filename, str, optional, is_metadata
            • parser_name, str, optional, is_metadata
            • prepend_text, str, optional, is_metadata – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
            • priority, (str, NoneType), optional, is_metadata – Set the priority of the job to be queued
            • qos, (str, NoneType), optional, is_metadata – Set the quality of service to use for the queue on the remote computer.
            • queue_name, (str, NoneType), optional, is_metadata – Set the name of the queue on the remote computer
            • rerunnable, (bool, NoneType), optional, is_metadata – Determines if the calculation can be requeued / rerun.
            • resources, dict, required, is_metadata – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
            • scheduler_stderr, str, optional, is_metadata – Filename to which the content of stderr of the scheduler is written.
            • scheduler_stdout, str, optional, is_metadata – Filename to which the content of stdout of the scheduler is written.
            • stash, Namespace – Optional directives to stash files after the calculation job has completed.
              • source_list, (tuple, list, NoneType), optional, is_metadata – Sequence of relative filepaths representing files in the remote directory that should be stashed.
              • stash_mode, (str, NoneType), optional, is_metadata – Mode with which to perform the stashing, should be value of aiida.common.datastructures.StashMode.
              • target_base, (str, NoneType), optional, is_metadata – The base location to which the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
            • submit_script_filename, str, optional, is_metadata – Filename to which the job submission script is written.
            • withmpi, bool, optional, is_metadata
            • without_xml, (bool, NoneType), optional, is_metadata – If set to True the parser will not fail if the XML file is missing in the retrieved folder.
          • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
        • monitors, Namespace – Add monitoring functions that can inspect output files while the job is running and decide to prematurely terminate the job.
        • parallelization, (Dict, NoneType), optional – Parallelization options. The following flags are allowed: npool : The number of ‘pools’, each taking care of a group of k-points. nband : The number of ‘band groups’, each taking care of a group of Kohn-Sham orbitals. ntg : The number of ‘task groups’ across which the FFT planes are distributed. ndiag : The number of ‘linear algebra groups’ used when parallelizing the subspace diagonalization / iterative orthonormalization. By default, no parameter is passed to Quantum ESPRESSO, meaning it will use its default.
        • parameters, Dict, required – The input parameters that are to be used to construct the input file.
        • pseudos, Namespace – A mapping of UpfData nodes onto the kind name to which they should apply.
        • remote_folder, (RemoteData, NoneType), optional – Remote directory containing the results of a calculation job that was already completed outside of AiiDA. The inputs should be passed to the CalcJob as normal but, instead of launching the actual job, the engine will recreate the input files and then proceed straight to the retrieve step, where the files of this RemoteData will be retrieved as if the job had actually been launched through AiiDA. If a parser is defined in the inputs, the results are parsed and attached as output nodes as usual.
        • settings, (Dict, NoneType), optional – Optional parameters to affect the way the calculation job and the parsing are performed.
        • vdw_table, (SinglefileData, NoneType), optional – Optional van der Waals table contained in a SinglefileData.
    • base_final_scf, Namespace – Inputs for the PwBaseWorkChain for the final scf.
      • handler_overrides, (Dict, NoneType), optional – Mapping where keys are process handler names and the values are a dictionary, where each dictionary can define the enabled and priority key, which can be used to toggle the values set on the original process handler declaration.
      • kpoints, (KpointsData, NoneType), optional – An explicit k-points list or mesh. Either this or kpoints_distance has to be provided.
      • kpoints_distance, (Float, NoneType), optional – The minimum desired distance in 1/Å between k-points in reciprocal space. The explicit k-points will be generated automatically by a calculation function based on the input structure.
      • kpoints_force_parity, (Bool, NoneType), optional – Optional input when constructing the k-points based on a desired kpoints_distance. Setting this to True will force the k-point mesh to have an even number of points along each lattice vector except for any non-periodic directions.
      • max_iterations, Int, optional – Maximum number of times the work chain will restart the process to try to finish it successfully.
      • metadata, Namespace
        • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
        • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
        • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
        • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
      • pw, Namespace
        • code, (AbstractCode, NoneType), optional – The Code to use for this job. This input is required, unless the remote_folder input is specified, which means an existing job is being imported and no code will actually be run.
        • hubbard_file, (SinglefileData, NoneType), optional – SinglefileData node containing the output Hubbard parameters from a HpCalculation
        • metadata, Namespace
          • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
          • computer, (Computer, NoneType), optional, is_metadata – When using a “local” code, set the computer on which the calculation should be run.
          • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
          • dry_run, bool, optional, is_metadata – When set to True will prepare the calculation job for submission but not actually launch it.
          • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
          • options, Namespace
            • account, (str, NoneType), optional, is_metadata – Set the account to use for the queue on the remote computer.
            • additional_retrieve_list, (list, tuple, NoneType), optional, is_metadata – List of relative file paths that should be retrieved in addition to what the plugin specifies.
            • append_text, str, optional, is_metadata – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
            • custom_scheduler_commands, str, optional, is_metadata – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
            • environment_variables, dict, optional, is_metadata – Set a dictionary of custom environment variables for this calculation
            • environment_variables_double_quotes, bool, optional, is_metadata – If set to True, use double quotes instead of single quotes to escape the environment variables specified in environment_variables.
            • import_sys_environment, bool, optional, is_metadata – If set to True, the submission script will load the system environment variables.
            • input_filename, str, optional, is_metadata
            • max_memory_kb, (int, NoneType), optional, is_metadata – Set the maximum memory (in kilobytes) to request from the scheduler.
            • max_wallclock_seconds, (int, NoneType), optional, is_metadata – Set the wallclock time in seconds to request from the scheduler.
            • mpirun_extra_params, (list, tuple), optional, is_metadata – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
            • output_filename, str, optional, is_metadata
            • parser_name, str, optional, is_metadata
            • prepend_text, str, optional, is_metadata – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
            • priority, (str, NoneType), optional, is_metadata – Set the priority of the job to be queued
            • qos, (str, NoneType), optional, is_metadata – Set the quality of service to use for the queue on the remote computer.
            • queue_name, (str, NoneType), optional, is_metadata – Set the name of the queue on the remote computer
            • rerunnable, (bool, NoneType), optional, is_metadata – Determines if the calculation can be requeued / rerun.
            • resources, dict, required, is_metadata – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
            • scheduler_stderr, str, optional, is_metadata – Filename to which the content of stderr of the scheduler is written.
            • scheduler_stdout, str, optional, is_metadata – Filename to which the content of stdout of the scheduler is written.
            • stash, Namespace – Optional directives to stash files after the calculation job has completed.
              • source_list, (tuple, list, NoneType), optional, is_metadata – Sequence of relative filepaths representing files in the remote directory that should be stashed.
              • stash_mode, (str, NoneType), optional, is_metadata – Mode with which to perform the stashing, should be value of aiida.common.datastructures.StashMode.
              • target_base, (str, NoneType), optional, is_metadata – The base location to which the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
            • submit_script_filename, str, optional, is_metadata – Filename to which the job submission script is written.
            • withmpi, bool, optional, is_metadata
            • without_xml, (bool, NoneType), optional, is_metadata – If set to True the parser will not fail if the XML file is missing in the retrieved folder.
          • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
        • monitors, Namespace – Add monitoring functions that can inspect output files while the job is running and decide to prematurely terminate the job.
        • parallelization, (Dict, NoneType), optional – Parallelization options. The following flags are allowed: npool : The number of ‘pools’, each taking care of a group of k-points. nband : The number of ‘band groups’, each taking care of a group of Kohn-Sham orbitals. ntg : The number of ‘task groups’ across which the FFT planes are distributed. ndiag : The number of ‘linear algebra groups’ used when parallelizing the subspace diagonalization / iterative orthonormalization. By default, no parameter is passed to Quantum ESPRESSO, meaning it will use its default.
        • parameters, Dict, required – The input parameters that are to be used to construct the input file.
        • pseudos, Namespace – A mapping of UpfData nodes onto the kind name to which they should apply.
        • remote_folder, (RemoteData, NoneType), optional – Remote directory containing the results of a calculation job that was already completed outside of AiiDA. The inputs should be passed to the CalcJob as normal but, instead of launching the actual job, the engine will recreate the input files and then proceed straight to the retrieve step, where the files of this RemoteData will be retrieved as if the job had actually been launched through AiiDA. If a parser is defined in the inputs, the results are parsed and attached as output nodes as usual.
        • settings, (Dict, NoneType), optional – Optional parameters to affect the way the calculation job and the parsing are performed.
        • vdw_table, (SinglefileData, NoneType), optional – Optional van der Waals table contained in a SinglefileData.
    • max_meta_convergence_iterations, Int, optional – The maximum number of variable cell relax iterations in the meta convergence cycle.
    • meta_convergence, Bool, optional – If True the workchain will perform a meta-convergence on the cell volume.
    • metadata, Namespace
      • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
      • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
      • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
      • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
    • volume_convergence, Float, optional – The volume difference threshold between two consecutive meta convergence iterations.
  • scf, Namespace – Inputs for the PwBaseWorkChain for the SCF calculation.
    • handler_overrides, (Dict, NoneType), optional – Mapping where keys are process handler names and the values are a dictionary, where each dictionary can define the enabled and priority key, which can be used to toggle the values set on the original process handler declaration.
    • kpoints, (KpointsData, NoneType), optional – An explicit k-points list or mesh. Either this or kpoints_distance has to be provided.
    • kpoints_distance, (Float, NoneType), optional – The minimum desired distance in 1/Å between k-points in reciprocal space. The explicit k-points will be generated automatically by a calculation function based on the input structure.
    • kpoints_force_parity, (Bool, NoneType), optional – Optional input when constructing the k-points based on a desired kpoints_distance. Setting this to True will force the k-point mesh to have an even number of points along each lattice vector except for any non-periodic directions.
    • max_iterations, Int, optional – Maximum number of times the work chain will restart the process to try to finish it successfully.
    • metadata, Namespace
      • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
      • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
      • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
      • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
    • pw, Namespace
      • code, (AbstractCode, NoneType), optional – The Code to use for this job. This input is required, unless the remote_folder input is specified, which means an existing job is being imported and no code will actually be run.
      • hubbard_file, (SinglefileData, NoneType), optional – SinglefileData node containing the output Hubbard parameters from a HpCalculation
      • metadata, Namespace
        • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
        • computer, (Computer, NoneType), optional, is_metadata – When using a “local” code, set the computer on which the calculation should be run.
        • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
        • dry_run, bool, optional, is_metadata – When set to True will prepare the calculation job for submission but not actually launch it.
        • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
        • options, Namespace
          • account, (str, NoneType), optional, is_metadata – Set the account to use for the queue on the remote computer
          • additional_retrieve_list, (list, tuple, NoneType), optional, is_metadata – List of relative file paths that should be retrieved in addition to what the plugin specifies.
          • append_text, str, optional, is_metadata – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
          • custom_scheduler_commands, str, optional, is_metadata – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
          • environment_variables, dict, optional, is_metadata – Set a dictionary of custom environment variables for this calculation
          • environment_variables_double_quotes, bool, optional, is_metadata – If set to True, use double quotes instead of single quotes to escape the environment variables specified in environment_variables.
          • import_sys_environment, bool, optional, is_metadata – If set to true, the submission script will load the system environment variables
          • input_filename, str, optional, is_metadata
          • max_memory_kb, (int, NoneType), optional, is_metadata – Set the maximum memory (in KiloBytes) to be asked to the scheduler
          • max_wallclock_seconds, (int, NoneType), optional, is_metadata – Set the wallclock in seconds asked to the scheduler
          • mpirun_extra_params, (list, tuple), optional, is_metadata – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
          • output_filename, str, optional, is_metadata
          • parser_name, str, optional, is_metadata
          • prepend_text, str, optional, is_metadata – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
          • priority, (str, NoneType), optional, is_metadata – Set the priority of the job to be queued
          • qos, (str, NoneType), optional, is_metadata – Set the quality of service to use for the queue on the remote computer
          • queue_name, (str, NoneType), optional, is_metadata – Set the name of the queue on the remote computer
          • rerunnable, (bool, NoneType), optional, is_metadata – Determines if the calculation can be requeued / rerun.
          • resources, dict, required, is_metadata – Set the dictionary of resources to be used by the scheduler plugin, like the number of nodes, cpus etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
          • scheduler_stderr, str, optional, is_metadata – Filename to which the content of stderr of the scheduler is written.
          • scheduler_stdout, str, optional, is_metadata – Filename to which the content of stdout of the scheduler is written.
          • stash, Namespace – Optional directives to stash files after the calculation job has completed.
            • source_list, (tuple, list, NoneType), optional, is_metadata – Sequence of relative filepaths representing files in the remote directory that should be stashed.
            • stash_mode, (str, NoneType), optional, is_metadata – Mode with which to perform the stashing, should be value of aiida.common.datastructures.StashMode.
            • target_base, (str, NoneType), optional, is_metadata – The base location to where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
          • submit_script_filename, str, optional, is_metadata – Filename to which the job submission script is written.
          • withmpi, bool, optional, is_metadata
          • without_xml, (bool, NoneType), optional, is_metadata – If set to True the parser will not fail if the XML file is missing in the retrieved folder.
        • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
      • monitors, Namespace – Add monitoring functions that can inspect output files while the job is running and decide to prematurely terminate the job.
      • parallelization, (Dict, NoneType), optional – Parallelization options. The following flags are allowed: npool : The number of ‘pools’, each taking care of a group of k-points. nband : The number of ‘band groups’, each taking care of a group of Kohn-Sham orbitals. ntg : The number of ‘task groups’ across which the FFT planes are distributed. ndiag : The number of ‘linear algebra groups’ used when parallelizing the subspace diagonalization / iterative orthonormalization. By default, no parameter is passed to Quantum ESPRESSO, meaning it will use its default.
      • parameters, Dict, required – The input parameters that are to be used to construct the input file.
      • parent_folder, (RemoteData, NoneType), optional – An optional working directory of a previously completed calculation to restart from.
      • pseudos, Namespace – A mapping of UpfData nodes onto the kind name to which they should apply.
      • remote_folder, (RemoteData, NoneType), optional – Remote directory containing the results of an already completed calculation job without AiiDA. The inputs should be passed to the CalcJob as normal but instead of launching the actual job, the engine will recreate the input files and then proceed straight to the retrieve step where the files of this RemoteData will be retrieved as if it had been actually launched through AiiDA. If a parser is defined in the inputs, the results are parsed and attached as output nodes as usual.
      • settings, (Dict, NoneType), optional – Optional parameters to affect the way the calculation job and the parsing are performed.
      • vdw_table, (SinglefileData, NoneType), optional – Optional van der Waals table contained in a SinglefileData.
  • structure, StructureData, required – The input structure.
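
When `nbands_factor` is supplied (see the class docstring above), the work chain takes the maximum of three candidate values for `nbnd` in the BANDS step. A minimal pure-Python sketch of that rule; the function name and the rounding up to an integer band count are assumptions of this sketch, not part of the work chain's API:

```python
import math

def compute_nbnd(nbnd_scf: int, nelectrons: float, nbands_factor: float) -> int:
    """Sketch of the nbnd selection rule used for the BANDS step.

    The three candidates mirror the docstring above; rounding up to an
    integer is an assumption of this sketch, not a documented detail.
    """
    candidates = (
        nbnd_scf,                          # nbnd of the preceding SCF calculation
        0.5 * nelectrons * nbands_factor,  # 0.5 * nelectrons * nbands_factor
        0.5 * nelectrons + 4,              # 0.5 * nelectrons + 4
    )
    return int(math.ceil(max(candidates)))

# e.g. 10 electrons, SCF used 8 bands, factor 2.0 -> max(8, 10.0, 9.0) = 10
```

Recall that `nbands_factor` and an explicit `bands.pw.parameters.SYSTEM.nbnd` are mutually exclusive; if neither is given, Quantum ESPRESSO's own default applies.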

Outputs:

  • band_parameters, Dict, required – The output parameters of the BANDS PwBaseWorkChain.
  • band_structure, BandsData, required – The computed band structure.
  • primitive_structure, StructureData, optional – The normalized and primitivized structure for which the bands are computed.
  • scf_parameters, Dict, required – The output parameters of the SCF PwBaseWorkChain.
  • seekpath_parameters, Dict, optional – The parameters used in the SeeKpath call to normalize the input or relaxed structure.

Outline:

setup(Define the current structure in the context to be the input structure.)
if(should_run_relax)
    run_relax(Run the PwRelaxWorkChain to run a relax PwCalculation.)
    inspect_relax(Verify that the PwRelaxWorkChain finished successfully.)
if(should_run_seekpath)
run_seekpath(Run the structure through SeeKpath to get the normalized structure and path along high-symmetry k-points. This is only called if the `bands_kpoints` input was not specified.)
run_scf(Run the PwBaseWorkChain in scf mode on the primitive cell of the (optionally relaxed) input structure.)
inspect_scf(Verify that the PwBaseWorkChain for the scf run finished successfully.)
run_bands(Run the PwBaseWorkChain in bands mode along the high-symmetry k-point path determined by SeeKpath.)
inspect_bands(Verify that the PwBaseWorkChain for the bands run finished successfully.)
results(Attach the desired output nodes directly as outputs of the workchain.)
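
The branching in the outline above can be summarized as a small pure-Python sketch that returns the ordered steps for a given set of inputs; the function and flag names are illustrative assumptions, not part of the work chain:

```python
def bands_outline(run_relax: bool, bands_kpoints_given: bool) -> list:
    """Return the ordered outline steps the work chain would execute.

    `run_relax` mirrors should_run_relax; SeeKpath runs only when no
    explicit `bands_kpoints` input was provided (should_run_seekpath).
    """
    steps = ['setup']
    if run_relax:
        steps += ['run_relax', 'inspect_relax']
    if not bands_kpoints_given:
        steps.append('run_seekpath')
    steps += ['run_scf', 'inspect_scf', 'run_bands', 'inspect_bands', 'results']
    return steps
```

For example, with no relaxation and an explicit `bands_kpoints` input, only the setup, SCF, BANDS, and results steps run; the `primitive_structure` and `seekpath_parameters` outputs are then absent, which is why they are marked optional above.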