PdosWorkChain

workchain: aiida_quantumespresso.workflows.pdos.PdosWorkChain

A WorkChain to compute the total and partial density of states (DOS) of a structure, using Quantum ESPRESSO.

Inputs:

  • align_to_fermi, Bool, optional – If true, Emin => Emin - Efermi and Emax => Emax - Efermi, where Efermi is taken from the nscf calculation. Note that aligning Emin and Emax to the Fermi level only makes sense if they are actually provided in the dos and projwfc inputs.
  • clean_workdir, Bool, optional – If True, work directories of all called calculations will be cleaned at the end of execution.
  • dos, Namespace – Input parameters for the dos.x calculation. Note that the Emin, Emax and DeltaE values have to match with those in the projwfc inputs.
    • code, (AbstractCode, NoneType), optional – The Code to use for this job. This input is required, unless the remote_folder input is specified, which means an existing job is being imported and no code will actually be run.
    • metadata, Namespace
      • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
      • computer, (Computer, NoneType), optional, is_metadata – When using a “local” code, set the computer on which the calculation should be run.
      • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
      • dry_run, bool, optional, is_metadata – When set to True will prepare the calculation job for submission but not actually launch it.
      • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
      • options, Namespace
        • account, (str, NoneType), optional, is_metadata – Set the account to use for the queue on the remote computer
        • additional_retrieve_list, (list, tuple, NoneType), optional, is_metadata – List of relative file paths that should be retrieved in addition to what the plugin specifies.
        • append_text, str, optional, is_metadata – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
        • custom_scheduler_commands, str, optional, is_metadata – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
        • environment_variables, dict, optional, is_metadata – Set a dictionary of custom environment variables for this calculation
        • environment_variables_double_quotes, bool, optional, is_metadata – If set to True, use double quotes instead of single quotes to escape the environment variables specified in environment_variables.
        • import_sys_environment, bool, optional, is_metadata – If set to true, the submission script will load the system environment variables
        • input_filename, str, optional, is_metadata
        • max_memory_kb, (int, NoneType), optional, is_metadata – Set the maximum memory (in kilobytes) to request from the scheduler
        • max_wallclock_seconds, (int, NoneType), optional, is_metadata – Set the wall-clock time limit in seconds to request from the scheduler
        • mpirun_extra_params, (list, tuple), optional, is_metadata – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
        • output_filename, str, optional, is_metadata
        • parser_name, str, optional, is_metadata
        • prepend_text, str, optional, is_metadata – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
        • priority, (str, NoneType), optional, is_metadata – Set the priority of the job to be queued
        • qos, (str, NoneType), optional, is_metadata – Set the quality of service to use for the queue on the remote computer
        • queue_name, (str, NoneType), optional, is_metadata – Set the name of the queue on the remote computer
        • rerunnable, (bool, NoneType), optional, is_metadata – Determines if the calculation can be requeued / rerun.
        • resources, dict, required, is_metadata – Set the dictionary of resources to be used by the scheduler plugin, such as the number of nodes, CPUs, etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
        • scheduler_stderr, str, optional, is_metadata – Filename to which the content of stderr of the scheduler is written.
        • scheduler_stdout, str, optional, is_metadata – Filename to which the content of stdout of the scheduler is written.
        • stash, Namespace – Optional directives to stash files after the calculation job has completed.
          • source_list, (tuple, list, NoneType), optional, is_metadata – Sequence of relative filepaths representing files in the remote directory that should be stashed.
          • stash_mode, (str, NoneType), optional, is_metadata – Mode with which to perform the stashing, should be value of aiida.common.datastructures.StashMode.
          • target_base, (str, NoneType), optional, is_metadata – The base location where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
        • submit_script_filename, str, optional, is_metadata – Filename to which the job submission script is written.
        • withmpi, bool, optional, is_metadata
      • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
    • monitors, Namespace – Add monitoring functions that can inspect output files while the job is running and decide to prematurely terminate the job.
    • parameters, (Dict, NoneType), optional – Parameters for the namelists in the input file.
    • remote_folder, (RemoteData, NoneType), optional – Remote directory containing the results of an already completed calculation job without AiiDA. The inputs should be passed to the CalcJob as normal but instead of launching the actual job, the engine will recreate the input files and then proceed straight to the retrieve step where the files of this RemoteData will be retrieved as if it had been actually launched through AiiDA. If a parser is defined in the inputs, the results are parsed and attached as output nodes as usual.
    • settings, (Dict, NoneType), optional – Use an additional node for special settings
  • dry_run, (Bool, NoneType), optional – Terminate workchain steps before submitting calculations (test purposes only).
  • metadata, Namespace
    • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
    • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
    • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
    • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
  • nscf, Namespace – Inputs for the PwBaseWorkChain of the nscf calculation.
    • handler_overrides, (Dict, NoneType), optional – Mapping where keys are process handler names and the values are a dictionary, where each dictionary can define the enabled and priority key, which can be used to toggle the values set on the original process handler declaration.
    • kpoints, (KpointsData, NoneType), optional – An explicit k-points list or mesh. Either this or kpoints_distance has to be provided.
    • kpoints_distance, (Float, NoneType), optional – The minimum desired distance in 1/Å between k-points in reciprocal space. The explicit k-points will be generated automatically by a calculation function based on the input structure.
    • kpoints_force_parity, (Bool, NoneType), optional – Optional input when constructing the k-points based on a desired kpoints_distance. Setting this to True will force the k-point mesh to have an even number of points along each lattice vector except for any non-periodic directions.
    • max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
    • metadata, Namespace
      • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
      • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
      • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
      • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
    • pw, Namespace
      • code, (AbstractCode, NoneType), optional – The Code to use for this job. This input is required, unless the remote_folder input is specified, which means an existing job is being imported and no code will actually be run.
      • hubbard_file, (SinglefileData, NoneType), optional – SinglefileData node containing the output Hubbard parameters from a HpCalculation
      • metadata, Namespace
        • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
        • computer, (Computer, NoneType), optional, is_metadata – When using a “local” code, set the computer on which the calculation should be run.
        • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
        • dry_run, bool, optional, is_metadata – When set to True will prepare the calculation job for submission but not actually launch it.
        • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
        • options, Namespace
          • account, (str, NoneType), optional, is_metadata – Set the account to use for the queue on the remote computer
          • additional_retrieve_list, (list, tuple, NoneType), optional, is_metadata – List of relative file paths that should be retrieved in addition to what the plugin specifies.
          • append_text, str, optional, is_metadata – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
          • custom_scheduler_commands, str, optional, is_metadata – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
          • environment_variables, dict, optional, is_metadata – Set a dictionary of custom environment variables for this calculation
          • environment_variables_double_quotes, bool, optional, is_metadata – If set to True, use double quotes instead of single quotes to escape the environment variables specified in environment_variables.
          • import_sys_environment, bool, optional, is_metadata – If set to true, the submission script will load the system environment variables
          • input_filename, str, optional, is_metadata
          • max_memory_kb, (int, NoneType), optional, is_metadata – Set the maximum memory (in kilobytes) to request from the scheduler
          • max_wallclock_seconds, (int, NoneType), optional, is_metadata – Set the wall-clock time limit in seconds to request from the scheduler
          • mpirun_extra_params, (list, tuple), optional, is_metadata – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
          • output_filename, str, optional, is_metadata
          • parser_name, str, optional, is_metadata
          • prepend_text, str, optional, is_metadata – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
          • priority, (str, NoneType), optional, is_metadata – Set the priority of the job to be queued
          • qos, (str, NoneType), optional, is_metadata – Set the quality of service to use for the queue on the remote computer
          • queue_name, (str, NoneType), optional, is_metadata – Set the name of the queue on the remote computer
          • rerunnable, (bool, NoneType), optional, is_metadata – Determines if the calculation can be requeued / rerun.
          • resources, dict, required, is_metadata – Set the dictionary of resources to be used by the scheduler plugin, such as the number of nodes, CPUs, etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
          • scheduler_stderr, str, optional, is_metadata – Filename to which the content of stderr of the scheduler is written.
          • scheduler_stdout, str, optional, is_metadata – Filename to which the content of stdout of the scheduler is written.
          • stash, Namespace – Optional directives to stash files after the calculation job has completed.
            • source_list, (tuple, list, NoneType), optional, is_metadata – Sequence of relative filepaths representing files in the remote directory that should be stashed.
            • stash_mode, (str, NoneType), optional, is_metadata – Mode with which to perform the stashing, should be value of aiida.common.datastructures.StashMode.
            • target_base, (str, NoneType), optional, is_metadata – The base location where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
          • submit_script_filename, str, optional, is_metadata – Filename to which the job submission script is written.
          • withmpi, bool, optional, is_metadata
          • without_xml, (bool, NoneType), optional, is_metadata – If set to True the parser will not fail if the XML file is missing in the retrieved folder.
        • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
      • monitors, Namespace – Add monitoring functions that can inspect output files while the job is running and decide to prematurely terminate the job.
      • parallelization, (Dict, NoneType), optional – Parallelization options. The following flags are allowed: npool (the number of ‘pools’, each taking care of a group of k-points), nband (the number of ‘band groups’, each taking care of a group of Kohn-Sham orbitals), ntg (the number of ‘task groups’ across which the FFT planes are distributed) and ndiag (the number of ‘linear algebra groups’ used when parallelizing the subspace diagonalization / iterative orthonormalization). By default, no parameter is passed to Quantum ESPRESSO, meaning it will use its default.
      • parameters, Dict, required – The input parameters that are to be used to construct the input file.
      • pseudos, Namespace – A mapping of UpfData nodes onto the kind name to which they should apply.
      • remote_folder, (RemoteData, NoneType), optional – Remote directory containing the results of an already completed calculation job without AiiDA. The inputs should be passed to the CalcJob as normal but instead of launching the actual job, the engine will recreate the input files and then proceed straight to the retrieve step where the files of this RemoteData will be retrieved as if it had been actually launched through AiiDA. If a parser is defined in the inputs, the results are parsed and attached as output nodes as usual.
      • settings, (Dict, NoneType), optional – Optional parameters to affect the way the calculation job and the parsing are performed.
      • vdw_table, (SinglefileData, NoneType), optional – Optional van der Waals table contained in a SinglefileData.
  • projwfc, Namespace – Input parameters for the projwfc.x calculation. Note that the Emin, Emax and DeltaE values have to match with those in the dos inputs.
    • code, (AbstractCode, NoneType), optional – The Code to use for this job. This input is required, unless the remote_folder input is specified, which means an existing job is being imported and no code will actually be run.
    • metadata, Namespace
      • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
      • computer, (Computer, NoneType), optional, is_metadata – When using a “local” code, set the computer on which the calculation should be run.
      • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
      • dry_run, bool, optional, is_metadata – When set to True will prepare the calculation job for submission but not actually launch it.
      • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
      • options, Namespace
        • account, (str, NoneType), optional, is_metadata – Set the account to use for the queue on the remote computer
        • additional_retrieve_list, (list, tuple, NoneType), optional, is_metadata – List of relative file paths that should be retrieved in addition to what the plugin specifies.
        • append_text, str, optional, is_metadata – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
        • custom_scheduler_commands, str, optional, is_metadata – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
        • environment_variables, dict, optional, is_metadata – Set a dictionary of custom environment variables for this calculation
        • environment_variables_double_quotes, bool, optional, is_metadata – If set to True, use double quotes instead of single quotes to escape the environment variables specified in environment_variables.
        • import_sys_environment, bool, optional, is_metadata – If set to true, the submission script will load the system environment variables
        • input_filename, str, optional, is_metadata
        • max_memory_kb, (int, NoneType), optional, is_metadata – Set the maximum memory (in kilobytes) to request from the scheduler
        • max_wallclock_seconds, (int, NoneType), optional, is_metadata – Set the wall-clock time limit in seconds to request from the scheduler
        • mpirun_extra_params, (list, tuple), optional, is_metadata – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
        • output_filename, str, optional, is_metadata
        • parser_name, str, optional, is_metadata
        • prepend_text, str, optional, is_metadata – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
        • priority, (str, NoneType), optional, is_metadata – Set the priority of the job to be queued
        • qos, (str, NoneType), optional, is_metadata – Set the quality of service to use for the queue on the remote computer
        • queue_name, (str, NoneType), optional, is_metadata – Set the name of the queue on the remote computer
        • rerunnable, (bool, NoneType), optional, is_metadata – Determines if the calculation can be requeued / rerun.
        • resources, dict, required, is_metadata – Set the dictionary of resources to be used by the scheduler plugin, such as the number of nodes, CPUs, etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
        • scheduler_stderr, str, optional, is_metadata – Filename to which the content of stderr of the scheduler is written.
        • scheduler_stdout, str, optional, is_metadata – Filename to which the content of stdout of the scheduler is written.
        • stash, Namespace – Optional directives to stash files after the calculation job has completed.
          • source_list, (tuple, list, NoneType), optional, is_metadata – Sequence of relative filepaths representing files in the remote directory that should be stashed.
          • stash_mode, (str, NoneType), optional, is_metadata – Mode with which to perform the stashing, should be value of aiida.common.datastructures.StashMode.
          • target_base, (str, NoneType), optional, is_metadata – The base location where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
        • submit_script_filename, str, optional, is_metadata – Filename to which the job submission script is written.
        • withmpi, bool, optional, is_metadata
      • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
    • monitors, Namespace – Add monitoring functions that can inspect output files while the job is running and decide to prematurely terminate the job.
    • parameters, (Dict, NoneType), optional – Parameters for the namelists in the input file.
    • remote_folder, (RemoteData, NoneType), optional – Remote directory containing the results of an already completed calculation job without AiiDA. The inputs should be passed to the CalcJob as normal but instead of launching the actual job, the engine will recreate the input files and then proceed straight to the retrieve step where the files of this RemoteData will be retrieved as if it had been actually launched through AiiDA. If a parser is defined in the inputs, the results are parsed and attached as output nodes as usual.
    • settings, (Dict, NoneType), optional – Use an additional node for special settings
  • scf, Namespace – Inputs for the PwBaseWorkChain of the scf calculation.
    • handler_overrides, (Dict, NoneType), optional – Mapping where keys are process handler names and the values are a dictionary, where each dictionary can define the enabled and priority key, which can be used to toggle the values set on the original process handler declaration.
    • kpoints, (KpointsData, NoneType), optional – An explicit k-points list or mesh. Either this or kpoints_distance has to be provided.
    • kpoints_distance, (Float, NoneType), optional – The minimum desired distance in 1/Å between k-points in reciprocal space. The explicit k-points will be generated automatically by a calculation function based on the input structure.
    • kpoints_force_parity, (Bool, NoneType), optional – Optional input when constructing the k-points based on a desired kpoints_distance. Setting this to True will force the k-point mesh to have an even number of points along each lattice vector except for any non-periodic directions.
    • max_iterations, Int, optional – Maximum number of iterations the work chain will restart the process to finish successfully.
    • metadata, Namespace
      • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
      • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
      • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
      • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
    • pw, Namespace
      • code, (AbstractCode, NoneType), optional – The Code to use for this job. This input is required, unless the remote_folder input is specified, which means an existing job is being imported and no code will actually be run.
      • hubbard_file, (SinglefileData, NoneType), optional – SinglefileData node containing the output Hubbard parameters from a HpCalculation
      • metadata, Namespace
        • call_link_label, str, optional, is_metadata – The label to use for the CALL link if the process is called by another process.
        • computer, (Computer, NoneType), optional, is_metadata – When using a “local” code, set the computer on which the calculation should be run.
        • description, (str, NoneType), optional, is_metadata – Description to set on the process node.
        • dry_run, bool, optional, is_metadata – When set to True will prepare the calculation job for submission but not actually launch it.
        • label, (str, NoneType), optional, is_metadata – Label to set on the process node.
        • options, Namespace
          • account, (str, NoneType), optional, is_metadata – Set the account to use for the queue on the remote computer
          • additional_retrieve_list, (list, tuple, NoneType), optional, is_metadata – List of relative file paths that should be retrieved in addition to what the plugin specifies.
          • append_text, str, optional, is_metadata – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
          • custom_scheduler_commands, str, optional, is_metadata – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. The difference of this option with respect to the prepend_text is the position in the scheduler submission file where such text is inserted: with this option, the string is inserted before any non-scheduler command
          • environment_variables, dict, optional, is_metadata – Set a dictionary of custom environment variables for this calculation
          • environment_variables_double_quotes, bool, optional, is_metadata – If set to True, use double quotes instead of single quotes to escape the environment variables specified in environment_variables.
          • import_sys_environment, bool, optional, is_metadata – If set to true, the submission script will load the system environment variables
          • input_filename, str, optional, is_metadata
          • max_memory_kb, (int, NoneType), optional, is_metadata – Set the maximum memory (in kilobytes) to request from the scheduler
          • max_wallclock_seconds, (int, NoneType), optional, is_metadata – Set the wall-clock time limit in seconds to request from the scheduler
          • mpirun_extra_params, (list, tuple), optional, is_metadata – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
          • output_filename, str, optional, is_metadata
          • parser_name, str, optional, is_metadata
          • prepend_text, str, optional, is_metadata – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
          • priority, (str, NoneType), optional, is_metadata – Set the priority of the job to be queued
          • qos, (str, NoneType), optional, is_metadata – Set the quality of service to use for the queue on the remote computer
          • queue_name, (str, NoneType), optional, is_metadata – Set the name of the queue on the remote computer
          • rerunnable, (bool, NoneType), optional, is_metadata – Determines if the calculation can be requeued / rerun.
          • resources, dict, required, is_metadata – Set the dictionary of resources to be used by the scheduler plugin, such as the number of nodes, CPUs, etc. This dictionary is scheduler-plugin dependent. Look at the documentation of the scheduler for more details.
          • scheduler_stderr, str, optional, is_metadata – Filename to which the content of stderr of the scheduler is written.
          • scheduler_stdout, str, optional, is_metadata – Filename to which the content of stdout of the scheduler is written.
          • stash, Namespace – Optional directives to stash files after the calculation job has completed.
            • source_list, (tuple, list, NoneType), optional, is_metadata – Sequence of relative filepaths representing files in the remote directory that should be stashed.
            • stash_mode, (str, NoneType), optional, is_metadata – Mode with which to perform the stashing, should be value of aiida.common.datastructures.StashMode.
            • target_base, (str, NoneType), optional, is_metadata – The base location where the files should be stashed. For example, for the copy stash mode, this should be an absolute filepath on the remote computer.
          • submit_script_filename, str, optional, is_metadata – Filename to which the job submission script is written.
          • withmpi, bool, optional, is_metadata
          • without_xml, (bool, NoneType), optional, is_metadata – If set to True the parser will not fail if the XML file is missing in the retrieved folder.
        • store_provenance, bool, optional, is_metadata – If set to False provenance will not be stored in the database.
      • monitors, Namespace – Add monitoring functions that can inspect output files while the job is running and decide to prematurely terminate the job.
      • parallelization, (Dict, NoneType), optional – Parallelization options. The following flags are allowed: npool (the number of ‘pools’, each taking care of a group of k-points), nband (the number of ‘band groups’, each taking care of a group of Kohn-Sham orbitals), ntg (the number of ‘task groups’ across which the FFT planes are distributed) and ndiag (the number of ‘linear algebra groups’ used when parallelizing the subspace diagonalization / iterative orthonormalization). By default, no parameter is passed to Quantum ESPRESSO, meaning it will use its default.
      • parameters, Dict, required – The input parameters that are to be used to construct the input file.
      • pseudos, Namespace – A mapping of UpfData nodes onto the kind name to which they should apply.
      • remote_folder, (RemoteData, NoneType), optional – Remote directory containing the results of an already completed calculation job without AiiDA. The inputs should be passed to the CalcJob as normal but instead of launching the actual job, the engine will recreate the input files and then proceed straight to the retrieve step where the files of this RemoteData will be retrieved as if it had been actually launched through AiiDA. If a parser is defined in the inputs, the results are parsed and attached as output nodes as usual.
      • settings, (Dict, NoneType), optional – Optional parameters to affect the way the calculation job and the parsing are performed.
      • vdw_table, (SinglefileData, NoneType), optional – Optional van der Waals table contained in a SinglefileData.
  • serial_clean, (Bool, NoneType), optional – If True, calculations will be run in serial, and work directories will be cleaned before the next step.
  • structure, StructureData, required – The input structure.
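
Both the scf and nscf namespaces accept kpoints_distance and kpoints_force_parity in place of an explicit mesh. The sketch below illustrates how a mesh can be derived from a target spacing; it is an assumption that this mirrors AiiDA's create_kpoints_from_distance calculation function, and the function name and arguments are illustrative:

```python
import math

def mesh_from_distance(reciprocal_norms, distance, force_parity=False):
    """Derive a k-point mesh from a target spacing in 1/Angstrom.

    ``reciprocal_norms`` are the norms |b_i| of the reciprocal lattice
    vectors (including the 2*pi factor). Handling of non-periodic
    directions, which kpoints_force_parity exempts, is omitted here.
    """
    # At least one k-point per direction; enough points that the spacing
    # along each reciprocal vector does not exceed ``distance``.
    mesh = [max(1, math.ceil(norm / distance)) for norm in reciprocal_norms]
    if force_parity:
        # Round odd counts up to the next even number along each vector.
        mesh = [n + n % 2 for n in mesh]
    return mesh
```

For a cubic cell with lattice parameter ~4 Å, |b_i| ≈ 2π/4 ≈ 1.571 1/Å, so a distance of 0.15 1/Å yields an 11×11×11 mesh, or 12×12×12 with parity forced.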

Outputs:

  • dos, Namespace
    • output_dos, XyData, required
    • output_parameters, Dict, required
    • remote_folder, RemoteData, required – Input files necessary to run the process will be stored in this folder node.
    • remote_stash, RemoteStashData, optional – Contents of the stash.source_list option are stored in this remote folder after job completion.
    • retrieved, FolderData, required – Files that are retrieved by the daemon will be stored in this node. By default the stdout and stderr of the scheduler will be added, but one can add more by specifying them in CalcInfo.retrieve_list.
  • nscf, Namespace
    • output_atomic_occupations, Dict, optional
    • output_band, BandsData, optional – The output_band output node of the successful calculation if present.
    • output_kpoints, KpointsData, optional
    • output_parameters, Dict, required – The output_parameters output node of the successful calculation.
    • output_structure, StructureData, optional – The output_structure output node of the successful calculation if present.
    • output_trajectory, TrajectoryData, optional
    • remote_folder, RemoteData, required – Input files necessary to run the process will be stored in this folder node.
    • remote_stash, RemoteStashData, optional – Contents of the stash.source_list option are stored in this remote folder after job completion.
    • retrieved, FolderData, required – Files that are retrieved by the daemon will be stored in this node. By default the stdout and stderr of the scheduler will be added, but one can add more by specifying them in CalcInfo.retrieve_list.
  • projwfc, Namespace
    • Dos, XyData, required
    • bands, BandsData, optional
    • bands_down, BandsData, optional
    • bands_up, BandsData, optional
    • output_parameters, Dict, required
    • projections, ProjectionData, optional
    • projections_down, ProjectionData, optional
    • projections_up, ProjectionData, optional
    • remote_folder, RemoteData, required – Input files necessary to run the process will be stored in this folder node.
    • remote_stash, RemoteStashData, optional – Contents of the stash.source_list option are stored in this remote folder after job completion.
    • retrieved, FolderData, required – Files that are retrieved by the daemon will be stored in this node. By default the stdout and stderr of the scheduler will be added, but one can add more by specifying them in CalcInfo.retrieve_list.
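
The align_to_fermi input described above amounts to a simple shift of the energy window passed to dos.x and projwfc.x. A minimal sketch of that arithmetic, with an illustrative function name:

```python
def align_window_to_fermi(emin, emax, fermi_energy):
    """Shift an (Emin, Emax) window so energies are relative to E_Fermi.

    Implements the documented behaviour Emin => Emin - Efermi and
    Emax => Emax - Efermi, with Efermi taken from the nscf calculation.
    All values in eV.
    """
    return emin - fermi_energy, emax - fermi_energy
```

For example, with Efermi = 5.5 eV a window of (-5.0, 15.0) eV becomes (-10.5, 9.5) eV. As noted in the input description, this only makes sense when Emin and Emax are explicitly provided in the dos and projwfc inputs.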

Outline:

setup(Initialize context variables that are used during the logical flow of the workchain.)
if(should_run_scf)
    run_scf(Run an SCF calculation, to generate the wavefunction.)
    inspect_scf(Verify that the SCF calculation finished successfully.)
run_nscf(Run an NSCF calculation, to generate eigenvalues with a denser k-point mesh. This calculation modifies the base scf calculation inputs by:
    - using the parent folder from the scf calculation;
    - replacing the kpoints, if an alternative is specified for nscf;
    - changing ``SYSTEM.occupations`` to 'tetrahedra';
    - setting ``SYSTEM.nosym`` to True, to avoid generation of additional k-points in low-symmetry cases;
    - replacing the ``pw.metadata.options``, if an alternative is specified for nscf.)
inspect_nscf(Verify that the NSCF calculation finished successfully.)
if(serial_clean)
    run_dos_serial(Run DOS calculation.)
    inspect_dos_serial(Verify that the DOS calculation finished successfully, then clean its remote directory.)
    run_projwfc_serial(Run Projwfc calculation.)
    inspect_projwfc_serial(Verify that the Projwfc calculation finished successfully, then clean its remote directory.)
elif(<lambda>)
    run_pdos_parallel(Run DOS and Projwfc calculations in parallel.)
    inspect_pdos_parallel(Verify that the DOS and Projwfc calculations finished successfully.)
results(Attach the desired output nodes directly as outputs of the workchain.)
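
The parallelization input accepted by the pw namespaces above is a Dict whose flags end up as pw.x command-line options. A sketch of that mapping, under the assumption that the four documented flags correspond to pw.x's -npool/-nband/-ntg/-ndiag options (the helper name is illustrative):

```python
def parallelization_flags(parallelization):
    """Map a parallelization dict onto pw.x command-line options.

    Flags that are not given are omitted entirely, so Quantum ESPRESSO
    falls back to its own defaults for those parallelization levels.
    """
    allowed = ('npool', 'nband', 'ntg', 'ndiag')
    cmdline = []
    for flag in allowed:
        if flag in parallelization:
            cmdline.extend([f'-{flag}', str(parallelization[flag])])
    return cmdline
```

For example, a Dict with {'npool': 4, 'ndiag': 1} would add "-npool 4 -ndiag 1" to the pw.x invocation.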