MPI related modules¶
parallel.f90¶
Description
This module contains the blocking information
Quick access
- Types:
load
- Variables:
chunksize,ierr,n_procs,nr_per_rank,nthreads,rank_bn,rank_with_l1m0,rank_with_r_lcr
- Routines:
check_mpi_error(),get_openmp_blocks(),getblocks(),mpiio_setup(),parallel()
Needed modules
- mpi
- omp_lib
Types
- type parallel_mod/load¶
- Type fields:
% n_per_rank [integer ]
% nstart [integer ]
% nstop [integer ]
Variables
- parallel_mod/chunksize [integer]¶
- parallel_mod/ierr [integer]¶
- parallel_mod/n_procs [integer]¶
- parallel_mod/nr_per_rank [integer]¶
- parallel_mod/nthreads [integer]¶
- parallel_mod/rank_bn [integer]¶
- parallel_mod/rank_with_l1m0 [integer]¶
- parallel_mod/rank_with_r_lcr [integer]¶
Subroutines and functions
- subroutine parallel_mod/check_mpi_error(code)¶
- Parameters:
code [integer ,in]
- subroutine parallel_mod/getblocks(bal, n_points, n_procs)¶
- Parameters:
bal (1 +) [load ,inout]
n_points [integer ,in]
n_procs [integer ,in]
- Called from:
initialize_blocking(),initphi(),initialize_geos(),initialize_outmisc_mod(),initialize_radial_data(),readstartfields_mpi()
- subroutine parallel_mod/get_openmp_blocks(nstart, nstop)¶
- Parameters:
nstart [integer ,inout]
nstop [integer ,inout]
- Called from:
get_dtblmfinish(),fft_many(),ifft_many(),native_qst_to_spat(),native_sphtor_to_spat(),native_sph_to_spat(),native_sph_to_grad_spat(),native_spat_to_sph(),native_spat_to_sph_tor(),prepareb_fd(),fill_ghosts_b(),updateb_fd(),finish_exp_mag(),finish_exp_mag_rdist(),get_mag_ic_rhs_imp(),assemble_mag_rloc(),get_mag_rhs_imp(),get_mag_rhs_imp_ghost(),preparephase_fd(),fill_ghosts_phi(),updatephase_fd(),get_phase_rhs_imp(),get_phase_rhs_imp_ghost(),assemble_phase_rloc(),prepares_fd(),fill_ghosts_s(),updates_fd(),finish_exp_entropy(),finish_exp_entropy_rdist(),get_entropy_rhs_imp(),get_entropy_rhs_imp_ghost(),assemble_entropy_rloc(),preparew_fd(),fill_ghosts_w(),updatew_fd(),finish_exp_pol(),finish_exp_pol_rdist(),get_pol_rhs_imp(),get_pol_rhs_imp_ghost(),assemble_pol(),assemble_pol_rloc(),finish_exp_smat(),finish_exp_smat_rdist(),assemble_single(),get_single_rhs_imp(),preparexi_fd(),fill_ghosts_xi(),updatexi_fd(),finish_exp_comp(),finish_exp_comp_rdist(),get_comp_rhs_imp(),get_comp_rhs_imp_ghost(),assemble_comp_rloc(),preparez_fd(),fill_ghosts_z(),updatez_fd(),get_tor_rhs_imp(),get_tor_rhs_imp_ghost(),assemble_tor_rloc()
- subroutine parallel_mod/mpiio_setup(info)¶
This routine sets up the default MPI-IO configuration, following the IDRIS recommendations in “Best practices for parallel IO and MPI-IO hints”. A hedged sketch of this kind of MPI_Info setup is given after this module's listing.
- Parameters:
info [integer ,out]
- Called from:
write_pot_mpi(),open_graph_file(),readstartfields_mpi(),store_mpi()
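As a complement to mpiio_setup(), the following is a minimal, self-contained sketch of how an MPI_Info object carrying ROMIO hints in the spirit of the IDRIS recommendations might be assembled. The hint keys and values used here (romio_ds_read, romio_ds_write, romio_cb_write) are illustrative assumptions and are not necessarily the ones actually set by the routine.

program mpiio_hints_sketch
   ! Hedged sketch: builds an MPI_Info object with ROMIO hints in the spirit
   ! of the IDRIS "Best practices for parallel IO and MPI-IO hints" advice.
   ! The specific keys/values below are illustrative assumptions, not
   ! necessarily the ones set by parallel_mod/mpiio_setup.
   use mpi
   implicit none

   integer :: info, ierr

   call MPI_Init(ierr)

   call MPI_Info_create(info, ierr)
   ! Disable ROMIO data sieving for reads and writes (assumed hint values)
   call MPI_Info_set(info, 'romio_ds_read',  'disable', ierr)
   call MPI_Info_set(info, 'romio_ds_write', 'disable', ierr)
   ! Enable collective buffering (two-phase I/O) on writes (assumed hint value)
   call MPI_Info_set(info, 'romio_cb_write', 'enable', ierr)

   ! 'info' would then be passed to MPI_File_open by the calling I/O routine
   call MPI_Info_free(info, ierr)
   call MPI_Finalize(ierr)
end program mpiio_hints_sketch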
radial_data.f90¶
Description
This module defines the MPI decomposition in the radial direction.
Quick access
- Variables:
n_r_cmb,n_r_icb,nrstart,nrstartmag,nrstop,nrstopmag,radial_balance
- Routines:
initialize_radial_data()
Needed modules
- iso_fortran_env (output_unit())
- parallel_mod (rank(),n_procs(),nr_per_rank(),load(),getblocks()): This module contains the blocking information
- logic (l_mag(),lverbose()): Module containing the logicals that control the run
Variables
- radial_data/n_r_cmb [integer,public/protected]¶
- radial_data/n_r_icb [integer,public/protected]¶
- radial_data/nrstart [integer,public/protected]¶
- radial_data/nrstartmag [integer,public/protected]¶
- radial_data/nrstop [integer,public/protected]¶
- radial_data/nrstopmag [integer,public/protected]¶
Subroutines and functions
- subroutine radial_data/initialize_radial_data(n_r_max)¶
This subroutine is used to set up the MPI decomposition in the radial direction; a standalone sketch of this kind of contiguous block split is given after this module's listing.
- Parameters:
n_r_max [integer ,in] :: Number of radial grid points
- Called from:
- Call to:
getblocks()
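As referenced above, here is a standalone sketch of the contiguous block decomposition that initialize_radial_data() and getblocks() conceptually perform. The load-like derived type and the splitting rule are illustrative assumptions and do not reproduce the exact code.

program radial_block_sketch
   ! Hedged sketch: distributes n_r_max radial grid points into contiguous
   ! blocks, one per MPI rank, the way radial_data/initialize_radial_data and
   ! parallel_mod/getblocks conceptually do.  The 'load'-like type and the
   ! splitting rule below are illustrative assumptions.
   implicit none

   type :: load_sketch
      integer :: nstart, nstop, n_per_rank
   end type load_sketch

   integer, parameter :: n_r_max = 33   ! number of radial grid points (example value)
   integer, parameter :: n_procs = 4    ! number of MPI ranks (example value)
   type(load_sketch) :: bal(0:n_procs-1)
   integer :: p, chunk, remainder, istart

   chunk     = n_r_max/n_procs
   remainder = mod(n_r_max, n_procs)
   istart    = 1

   do p = 0, n_procs-1
      ! The first 'remainder' ranks get one extra point so that the blocks
      ! cover 1..n_r_max exactly.
      bal(p)%n_per_rank = chunk
      if ( p < remainder ) bal(p)%n_per_rank = chunk+1
      bal(p)%nstart = istart
      bal(p)%nstop  = istart + bal(p)%n_per_rank - 1
      istart        = bal(p)%nstop + 1
      print '(A,I2,A,I3,A,I3)', 'rank ', p, ': nRstart=', bal(p)%nstart, &
           & ' nRstop=', bal(p)%nstop
   end do
end program radial_block_sketch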
communications.f90¶
Description
This module contains the different MPI communicators used in MagIC.
Quick access
- Types:
gather_type
- Variables:
create_gather_type,destroy_gather_type,find_faster_block,find_faster_comm,get_global_sum,get_global_sum_cmplx_1d,get_global_sum_cmplx_2d,get_global_sum_real_2d,gt_cheb,gt_ic,gt_oc,lm2lo_redist,lo2lm_redist,reduce_radial,reduce_radial_1d,reduce_radial_2d,send_lm_pair_to_master,send_lm_pair_to_master_arr,send_lm_pair_to_master_scal_cmplx,send_lm_pair_to_master_scal_real,send_scal_lm_to_master,temp_gather_lo
- Routines:
allgather_from_rloc(),finalize_communications(),gather_all_from_lo_to_rank0(),gather_from_lo_to_rank0(),gather_from_rloc(),initialize_communications(),myallgather(),reduce_scalar(),scatter_from_rank0_to_lo(),transp_phi2r(),transp_r2phi()
Needed modules
- mpimod
- constants (zero()): module containing constants and parameters used in the code.
- precision_mod: This module controls the precision used in MagIC
- mem_alloc (memwrite(),bytes_allocated()): This little module is used to estimate the global memory allocation used in MagIC
- parallel_mod (rank(),n_procs(),ierr(),load()): This module contains the blocking information
- truncation (l_max(),lm_max(),minc(),n_r_max(),n_r_ic_max(),fd_order(),fd_order_bound(),m_max(),m_min()): This module defines the grid points and the truncation
- blocking (st_map(),lo_map(),lm_balance(),llm(),ulm()): Module containing blocking information
- radial_data (nrstart(),nrstop(),radial_balance()): This module defines the MPI decomposition in the radial direction.
- logic (l_mag(),l_conv(),l_heat(),l_chemical_conv(),l_finite_diff(),l_mag_kin(),l_double_curl(),l_save_out(),l_packed_transp(),l_parallel_solve(),l_mag_par_solve()): Module containing the logicals that control the run
- useful (abortrun()): This module contains several useful routines.
- output_data (n_log_file(),log_file()): This module contains the parameters for output control
- iso_fortran_env (output_unit())
- mpi_ptop_mod (type_mpiptop()): This module contains the implementation of MPI_Isend/MPI_Irecv global transpose
- mpi_alltoall_mod (type_mpiatoav(),type_mpiatoaw(),type_mpiatoap()): This module contains the implementation of all-to-all global communicators
- charmanip (capitalize()): This module contains several useful routines to manipulate character strings
- num_param (mpi_transp(),mpi_packing()): Module containing numerical and control parameters
- mpi_transp_mod (type_mpitransp()): This is an abstract class that will be used to define MPI transposers. The actual implementation is deferred to either point-to-point (MPI_Isend and MPI_Irecv) communications or all-to-all (MPI_Alltoall)
Types
- type communications/gather_type¶
- Type fields:
% dim2 [integer ]
% gather_mpi_type (*) [integer ,allocatable]
Variables
- communications/create_gather_type [private]¶
- communications/destroy_gather_type [private]¶
- communications/find_faster_block [private]¶
- communications/find_faster_comm [private]¶
- communications/get_global_sum [public]¶
- communications/get_global_sum_cmplx_1d [private]¶
- communications/get_global_sum_cmplx_2d [private]¶
- communications/get_global_sum_real_2d [private]¶
- communications/gt_cheb [gather_type,public]¶
- communications/gt_ic [gather_type,public]¶
- communications/gt_oc [gather_type,public]¶
- communications/lm2lo_redist [private]¶
- communications/lo2lm_redist [private]¶
- communications/reduce_radial [public]¶
- communications/reduce_radial_1d [private]¶
- communications/reduce_radial_2d [private]¶
- communications/send_lm_pair_to_master [public]¶
- communications/send_lm_pair_to_master_arr [private]¶
- communications/send_lm_pair_to_master_scal_cmplx [private]¶
- communications/send_lm_pair_to_master_scal_real [private]¶
- communications/send_scal_lm_to_master [private]¶
- communications/temp_gather_lo (*) [complex,private/allocatable]¶
Subroutines and functions
- subroutine communications/initialize_communications()¶
- Called from:
- Call to:
- subroutine communications/gather_all_from_lo_to_rank0(self, arr_lo, arr_full)¶
- Parameters:
- Called from:
fields_average(),write_pot_mpi(),write_pot(),output(),store()
- subroutine communications/gather_from_lo_to_rank0(arr_lo, arr_full)¶
- Parameters:
- Called from:
fields_average(),get_e_mag(),get_onset(),write_bcmb(),calc_dtb_frame_ic(),output(),store_mpi()
- subroutine communications/scatter_from_rank0_to_lo(arr_full, arr_lo)¶
This subroutine scatters a complex input array of size lm_max and rearranges the (l,m) pairs using the local mapping.
- Parameters:
- Called from:
readstartfields_old(),readstartfields(),readstartfields_mpi(),step_time()
- subroutine communications/gather_from_rloc(arr_rloc, arr_glob, irank)¶
This subroutine gathers an r-distributed array on rank=irank; a minimal MPI_Gatherv-based sketch of this pattern is given after this module's listing.
- Parameters:
- Called from:
outhemi(),outhelicity(),outphase(),outpar(),outperppar(),get_power()
- subroutine communications/allgather_from_rloc(arr_rloc, arr_glob)¶
This subroutine gathers an r-distributed array on all ranks.
- Parameters:
- Called from:
- subroutine communications/reduce_scalar(scal_dist, scal_glob, irank)¶
- Parameters:
scal_dist [real ,in]
scal_glob [real ,inout]
irank [integer ,in]
- Called from:
- subroutine communications/transp_r2phi(arr_rloc, arr_ploc, phi_balance, npstart, npstop)¶
This subroutine is used to compute an MPI transpose between an R-distributed array and a Phi-distributed array
- Parameters:
arr_rloc (*,*,1 + nrstop - nrstart) [real ,in] :: Input array (R-distributed)
arr_ploc (*,1 + npstop - npstart,*) [real ,out] :: Output array (Phi-distributed)
phi_balance (1 +) [load ,in] :: Balancing info along phi
npstart [integer ,in] :: First index for phi-distributed arrays
npstop [integer ,in] :: Last index for phi-distributed arrays
- Called from:
- subroutine communications/transp_phi2r(arr_ploc, arr_rloc, phi_balance, npstart, npstop)¶
This subroutine is used to compute an MPI transpose between a Phi-distributed array and an R-distributed array
- Parameters:
arr_ploc (*,1 + npstop - npstart,*) [real ,in] :: Input array (Phi-distributed)
arr_rloc (*,*,1 + nrstop - nrstart) [real ,out] :: Output array (R-distributed)
phi_balance (1 +) [load ,in] :: Balancing info along phi
npstart [integer ,in,optional] :: First index for phi-distributed arrays
npstop [integer ,in] :: Last index for phi-distributed arrays
- Called from:
- subroutine communications/myallgather(arr, dim1, dim2)¶
- Parameters:
arr (dim1,dim2) [complex ,inout]
dim1 [integer ,in]
dim2 [integer ,in]
- Use:
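The gather routines above all follow the same pattern: each rank contributes its contiguous radial block and one rank (or all ranks) assembles the full array. Below is a hedged, self-contained sketch of that pattern using MPI_Gatherv, in the spirit of gather_from_rloc(); the block split, the variable names and the helper block_bounds are illustrative assumptions, not MagIC's actual implementation.

program gather_rloc_sketch
   ! Hedged sketch: gathers an r-distributed 1-D real array onto one rank with
   ! MPI_Gatherv, mirroring the pattern of communications/gather_from_rloc.
   ! The contiguous block split and all names below are illustrative assumptions.
   use mpi
   implicit none

   integer, parameter :: n_r_max = 33
   integer :: rank, n_procs, ierr, irank, nRstart, nRstop, p, i0, i1
   integer, allocatable :: rcounts(:), rdispls(:)
   double precision, allocatable :: arr_Rloc(:), arr_glob(:)

   call MPI_Init(ierr)
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
   call MPI_Comm_size(MPI_COMM_WORLD, n_procs, ierr)

   call block_bounds(rank, n_procs, nRstart, nRstop)   ! this rank's radial block
   allocate(arr_Rloc(nRstart:nRstop))
   arr_Rloc(:) = dble(rank)                            ! dummy local data

   irank = 0                                           ! receiving rank
   allocate(rcounts(0:n_procs-1), rdispls(0:n_procs-1), arr_glob(n_r_max))
   do p = 0, n_procs-1                                 ! counts/displacements per rank
      call block_bounds(p, n_procs, i0, i1)
      rcounts(p) = i1-i0+1
      rdispls(p) = i0-1
   end do

   call MPI_Gatherv(arr_Rloc, size(arr_Rloc), MPI_DOUBLE_PRECISION, arr_glob, &
        & rcounts, rdispls, MPI_DOUBLE_PRECISION, irank, MPI_COMM_WORLD, ierr)

   if ( rank == irank ) print *, 'gathered:', arr_glob
   call MPI_Finalize(ierr)

contains

   subroutine block_bounds(p, np, istart, istop)
      ! Contiguous split of 1..n_r_max over np ranks; rank p owns istart:istop
      integer, intent(in)  :: p, np
      integer, intent(out) :: istart, istop
      integer :: chunk, remainder
      chunk     = n_r_max/np
      remainder = mod(n_r_max, np)
      istart = p*chunk + min(p, remainder) + 1
      istop  = istart + chunk - 1
      if ( p < remainder ) istop = istop+1
   end subroutine block_bounds

end program gather_rloc_sketch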
mpi_transpose.f90¶
Description
This is an abstract class that will be used to define MPI transposers. The actual implementation is deferred to either point-to-point (MPI_Isend and MPI_Irecv) communications or all-to-all (MPI_Alltoall). A minimal sketch of this abstract-type pattern is given after this module's listing.
Quick access
- Types:
type_mpitransp
Needed modules
- precision_mod: This module controls the precision used in MagIC
- truncation (lm_max(),n_r_max()): This module defines the grid points and the truncation
- radial_data (nrstart(),nrstop()): This module defines the MPI decomposition in the radial direction.
- blocking (llm(),ulm()): Module containing blocking information
Types
- type mpi_transp_mod/type_mpitransp¶
- Type fields:
% n_fields [integer ]
Variables
- mpi_transp_mod/unknown_interface [private]¶
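To illustrate the deferred-implementation design described above, here is a minimal sketch of an abstract type with deferred type-bound procedures and one concrete extension. The binding names (create, destroy) and their interfaces are assumptions chosen for the example; the real type_mpitransp defines its own set of deferred transposition methods.

module transp_sketch_mod
   ! Hedged sketch of the abstract-type pattern behind mpi_transp_mod/type_mpitransp:
   ! an abstract parent with deferred bindings, and one concrete extension.
   ! Binding names and interfaces here are illustrative assumptions.
   implicit none
   private

   type, abstract, public :: transp_sketch
      integer :: n_fields                      ! number of fields transposed at once
   contains
      procedure(create_if),  deferred :: create
      procedure(destroy_if), deferred :: destroy
   end type transp_sketch

   abstract interface
      subroutine create_if(this, n_fields)
         import :: transp_sketch
         class(transp_sketch), intent(inout) :: this
         integer,              intent(in)    :: n_fields
      end subroutine create_if
      subroutine destroy_if(this)
         import :: transp_sketch
         class(transp_sketch), intent(inout) :: this
      end subroutine destroy_if
   end interface

   ! A concrete extension, e.g. an all-to-all or a point-to-point implementation:
   type, extends(transp_sketch), public :: transp_a2a_sketch
   contains
      procedure :: create  => create_a2a
      procedure :: destroy => destroy_a2a
   end type transp_a2a_sketch

contains

   subroutine create_a2a(this, n_fields)
      class(transp_a2a_sketch), intent(inout) :: this
      integer,                  intent(in)    :: n_fields
      this%n_fields = n_fields                 ! set-up work would go here
   end subroutine create_a2a

   subroutine destroy_a2a(this)
      class(transp_a2a_sketch), intent(inout) :: this
      this%n_fields = 0                        ! clean-up work would go here
   end subroutine destroy_a2a

end module transp_sketch_mod

A caller would declare a class(transp_sketch), allocatable variable and allocate one or the other extension at run time, which is how the choice between point-to-point and all-to-all transposes can be deferred.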
parallel_solvers.f90¶
Description
This module contains the routines that are used to solve linear banded problems with R-distributed arrays.
Quick access
- Types:
type_tri_par,type_penta_par
- Variables:
finalize_3,finalize_5,initialize_3,initialize_5,prepare_mat_3,prepare_mat_5,solver_dn_3,solver_dn_5,solver_finish_3,solver_finish_5,solver_single,solver_up_3,solver_up_5
Needed modules
- precision_mod: This module controls the precision used in MagIC
- parallel_mod: This module contains the blocking information
- radial_data (n_r_cmb(),n_r_icb()): This module defines the MPI decomposition in the radial direction.
- mem_alloc (bytes_allocated()): This little module is used to estimate the global memory allocation used in MagIC
- constants (one()): module containing constants and parameters used in the code.
- truncation (lm_max()): This module defines the grid points and the truncation
Types
- type parallel_solvers/type_tri_par¶
- Type fields:
% diag (*,*) [real ,allocatable]
% lmax [integer ]
% lmin [integer ]
% low (*,*) [real ,allocatable]
% nrmax [integer ]
% nrmin [integer ]
% up (*,*) [real ,allocatable]
- type parallel_solvers/type_penta_par¶
- Type fields:
% diag (*,*) [real ,allocatable]
% lmax [integer ]
% lmin [integer ]
% low1 (*,*) [real ,allocatable]
% low2 (*,*) [real ,allocatable]
% nrmax [integer ]
% nrmin [integer ]
% up1 (*,*) [real ,allocatable]
% up2 (*,*) [real ,allocatable]
Variables
- parallel_solvers/finalize_3 [private]¶
- parallel_solvers/finalize_5 [private]¶
- parallel_solvers/initialize_3 [private]¶
- parallel_solvers/initialize_5 [private]¶
- parallel_solvers/prepare_mat_3 [private]¶
- parallel_solvers/prepare_mat_5 [private]¶
- parallel_solvers/solver_dn_3 [private]¶
- parallel_solvers/solver_dn_5 [private]¶
- parallel_solvers/solver_finish_3 [private]¶
- parallel_solvers/solver_finish_5 [private]¶
- parallel_solvers/solver_single [private]¶
Used for one single right-hand side; a serial Thomas-algorithm sketch of this kind of solve is given at the end of this listing.
- parallel_solvers/solver_up_3 [private]¶
- parallel_solvers/solver_up_5 [private]¶
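For orientation, the sketch below shows a plain serial Thomas-algorithm solve of a single tridiagonal system stored as low/diag/up bands, i.e. the kind of operation the single right-hand-side solver performs for one (l,m) mode. The band values are made-up example data; the actual module operates on R-distributed arrays and splits the forward and backward sweeps across ranks (solver_up_3, solver_dn_3, solver_finish_3), which this sketch does not attempt to reproduce.

program thomas_sketch
   ! Hedged sketch: a serial Thomas-algorithm solve of a tridiagonal system
   ! stored as low/diag/up bands, in the spirit of what the single
   ! right-hand-side solver of parallel_solvers does for one (l,m) mode.
   implicit none

   integer, parameter :: n = 5
   double precision :: low(n), diag(n), up(n), rhs(n)
   double precision :: dprime(n), rprime(n), w
   integer :: i

   ! Example bands: a diagonally dominant system (made-up data)
   low(:)  = -1.0d0;  low(1) = 0.0d0       ! sub-diagonal (unused first entry)
   up(:)   = -1.0d0;  up(n)  = 0.0d0       ! super-diagonal (unused last entry)
   diag(:) =  4.0d0
   rhs(:)  =  1.0d0

   ! Forward elimination
   dprime(1) = diag(1)
   rprime(1) = rhs(1)
   do i=2,n
      w         = low(i)/dprime(i-1)
      dprime(i) = diag(i) - w*up(i-1)
      rprime(i) = rhs(i)  - w*rprime(i-1)
   end do

   ! Back substitution: the solution overwrites rhs
   rhs(n) = rprime(n)/dprime(n)
   do i=n-1,1,-1
      rhs(i) = (rprime(i) - up(i)*rhs(i+1))/dprime(i)
   end do

   print '(A,5F10.6)', 'solution: ', rhs
end program thomas_sketch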