
MPI related modules¶

parallel.f90¶

Description

This module contains the blocking information

Quick access

Types:

load

Variables:

chunksize, ierr, n_procs, nr_per_rank, nthreads, rank_bn, rank_with_l1m0, rank_with_r_lcr

Routines:

check_mpi_error(), get_openmp_blocks(), getblocks(), mpiio_setup(), parallel()

Needed modules

  • mpi

  • omp_lib

Types

  • type  parallel_mod/load¶
    Type fields:
    • % n_per_rank [integer ]

    • % nstart [integer ]

    • % nstop [integer ]

Variables

  • parallel_mod/chunksize [integer]¶
  • parallel_mod/ierr [integer]¶
  • parallel_mod/n_procs [integer]¶
  • parallel_mod/nr_per_rank [integer]¶
  • parallel_mod/nthreads [integer]¶
  • parallel_mod/rank_bn [integer]¶
  • parallel_mod/rank_with_l1m0 [integer]¶
  • parallel_mod/rank_with_r_lcr [integer]¶

Subroutines and functions

subroutine  parallel_mod/parallel()¶
Called from:

magic
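
The routine takes no arguments and is called once at startup. A minimal sketch of the kind of initialization it plausibly performs, filling the module variables rank, n_procs and nthreads documented above (the actual body of parallel.f90 may differ):

    ! Sketch only: initialize MPI and query rank, n_procs, nthreads.
    subroutine parallel_sketch
       use mpi
       !$ use omp_lib
       implicit none
       integer :: rank, n_procs, nthreads, ierr, provided

       nthreads = 1
       call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
       call MPI_Comm_size(MPI_COMM_WORLD, n_procs, ierr)
       call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
       !$omp parallel
       !$omp master
       !$ nthreads = omp_get_num_threads()
       !$omp end master
       !$omp end parallel
    end subroutine parallel_sketch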

subroutine  parallel_mod/check_mpi_error(code)¶
Parameters:

code [integer ,in]
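
A hedged sketch of what such a checker typically does: translate a non-zero code into a readable message and abort (the exact handling in MagIC may differ):

    ! Sketch: turn an MPI error code into a message and stop the run.
    subroutine check_mpi_error_sketch(code)
       use mpi
       implicit none
       integer, intent(in) :: code
       character(len=MPI_MAX_ERROR_STRING) :: msg
       integer :: msg_len, ierr

       if ( code /= MPI_SUCCESS ) then
          call MPI_Error_string(code, msg, msg_len, ierr)
          write(*,'(A)') 'MPI error: '//msg(1:msg_len)
          call MPI_Abort(MPI_COMM_WORLD, code, ierr)
       end if
    end subroutine check_mpi_error_sketch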

subroutine  parallel_mod/getblocks(bal, n_points, n_procs)¶
Parameters:
  • bal (1 +) [load ,inout]

  • n_points [integer ,in]

  • n_procs [integer ,in]

Called from:

initialize_blocking(), initphi(), initialize_geos(), initialize_outmisc_mod(), initialize_radial_data(), readstartfields_mpi()
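
The balancing follows the standard recipe for splitting n_points indices over n_procs ranks: every rank gets n_points/n_procs points and the first mod(n_points,n_procs) ranks get one extra. A self-contained sketch using a mirror of the load type above (the 0-based indexing of bal is an assumption):

    module getblocks_sketch_mod
       implicit none
       type :: load                   ! mirror of parallel_mod/load
          integer :: nstart, nstop, n_per_rank
       end type load
    contains
       ! Sketch: fill bal(0:n_procs-1) with contiguous, balanced blocks.
       subroutine getblocks_sketch(bal, n_points, n_procs)
          type(load), intent(inout) :: bal(0:)
          integer,    intent(in)    :: n_points, n_procs
          integer :: p, base, rem

          base = n_points/n_procs
          rem  = mod(n_points, n_procs)
          do p=0,n_procs-1
             bal(p)%n_per_rank = base
             if ( p < rem ) bal(p)%n_per_rank = base+1
             if ( p == 0 ) then
                bal(p)%nstart = 1
             else
                bal(p)%nstart = bal(p-1)%nstop+1
             end if
             bal(p)%nstop = bal(p)%nstart+bal(p)%n_per_rank-1
          end do
       end subroutine getblocks_sketch
    end module getblocks_sketch_mod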

subroutine  parallel_mod/get_openmp_blocks(nstart, nstop)¶
Parameters:
  • nstart [integer ,inout]

  • nstop [integer ,inout]

Called from:

get_dtblmfinish(), fft_many(), ifft_many(), native_qst_to_spat(), native_sphtor_to_spat(), native_sph_to_spat(), native_sph_to_grad_spat(), native_spat_to_sph(), native_spat_to_sph_tor(), prepareb_fd(), fill_ghosts_b(), updateb_fd(), finish_exp_mag(), finish_exp_mag_rdist(), get_mag_ic_rhs_imp(), assemble_mag_rloc(), get_mag_rhs_imp(), get_mag_rhs_imp_ghost(), preparephase_fd(), fill_ghosts_phi(), updatephase_fd(), get_phase_rhs_imp(), get_phase_rhs_imp_ghost(), assemble_phase_rloc(), prepares_fd(), fill_ghosts_s(), updates_fd(), finish_exp_entropy(), finish_exp_entropy_rdist(), get_entropy_rhs_imp(), get_entropy_rhs_imp_ghost(), assemble_entropy_rloc(), preparew_fd(), fill_ghosts_w(), updatew_fd(), finish_exp_pol(), finish_exp_pol_rdist(), get_pol_rhs_imp(), get_pol_rhs_imp_ghost(), assemble_pol(), assemble_pol_rloc(), finish_exp_smat(), finish_exp_smat_rdist(), assemble_single(), get_single_rhs_imp(), preparexi_fd(), fill_ghosts_xi(), updatexi_fd(), finish_exp_comp(), finish_exp_comp_rdist(), get_comp_rhs_imp(), get_comp_rhs_imp_ghost(), assemble_comp_rloc(), preparez_fd(), fill_ghosts_z(), updatez_fd(), get_tor_rhs_imp(), get_tor_rhs_imp_ghost(), assemble_tor_rloc()
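
As the long caller list shows, this helper is used inside OpenMP parallel regions: it shrinks the caller's (nstart,nstop) range, in place, to the sub-block owned by the calling thread. A minimal sketch of that logic (the exact balancing in MagIC may differ):

    ! Sketch: restrict (nstart,nstop) to the calling thread's block.
    ! Meant to be called from inside an OpenMP parallel region.
    subroutine get_openmp_blocks_sketch(nstart, nstop)
       !$ use omp_lib
       implicit none
       integer, intent(inout) :: nstart, nstop
       integer :: n_threads, i_thread, n, base, rem

       n_threads = 1; i_thread = 0
       !$ n_threads = omp_get_num_threads()
       !$ i_thread  = omp_get_thread_num()
       n    = nstop-nstart+1
       base = n/n_threads
       rem  = mod(n, n_threads)
       if ( i_thread < rem ) then
          nstart = nstart+i_thread*(base+1)
          nstop  = nstart+base
       else
          nstart = nstart+rem*(base+1)+(i_thread-rem)*base
          nstop  = nstart+base-1
       end if
    end subroutine get_openmp_blocks_sketch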

subroutine  parallel_mod/mpiio_setup(info)¶

This routine sets up the default MPI-IO configuration, following the IDRIS recommendations in “Best practices for parallel I/O and MPI-IO hints”.

Parameters:

info [integer ,out]

Called from:

write_pot_mpi(), open_graph_file(), readstartfields_mpi(), store_mpi()
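
Such a setup routine boils down to creating an MPI_Info object and attaching MPI-IO hints to it. A hedged sketch with hints commonly recommended for ROMIO-based implementations (the hints MagIC actually sets may differ):

    ! Sketch: build an info object that disables data sieving and
    ! enables collective buffering for MPI-IO.
    subroutine mpiio_setup_sketch(info)
       use mpi
       implicit none
       integer, intent(out) :: info
       integer :: ierr

       call MPI_Info_create(info, ierr)
       call MPI_Info_set(info, 'romio_ds_read',  'disable', ierr)
       call MPI_Info_set(info, 'romio_ds_write', 'disable', ierr)
       call MPI_Info_set(info, 'romio_cb_read',  'enable',  ierr)
       call MPI_Info_set(info, 'romio_cb_write', 'enable',  ierr)
    end subroutine mpiio_setup_sketch

The resulting info handle can then be passed to MPI_File_open by the callers listed above.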

radial_data.f90¶

Description

This module defines the MPI decomposition in the radial direction.

Quick access

Variables:

n_r_cmb, n_r_icb, nrstart, nrstartmag, nrstop, nrstopmag, radial_balance

Routines:

finalize_radial_data(), initialize_radial_data()

Needed modules

  • iso_fortran_env (output_unit())

  • parallel_mod (rank(), n_procs(), nr_per_rank(), load(), getblocks()): This module contains the blocking information

  • logic (l_mag(), lverbose()): Module containing the logicals that control the run

Variables

  • radial_data/n_r_cmb [integer,public/protected]¶
  • radial_data/n_r_icb [integer,public/protected]¶
  • radial_data/nrstart [integer,public/protected]¶
  • radial_data/nrstartmag [integer,public/protected]¶
  • radial_data/nrstop [integer,public/protected]¶
  • radial_data/nrstopmag [integer,public/protected]¶
  • radial_data/radial_balance (*) [load,allocatable/public]¶

Subroutines and functions

subroutine  radial_data/initialize_radial_data(n_r_max)¶

This subroutine is used to set up the MPI decomposition in the radial direction

Parameters:

n_r_max [integer ,in] :: Number of radial grid points

Called from:

magic

Call to:

getblocks()
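
Conceptually, the routine hands the n_r_max grid points to getblocks() and stores this rank's bounds. A sketch of a plausible body, written as if inside radial_data so that the module variables above are host-associated:

    ! Sketch: split n_r_max radial points over n_procs ranks and keep
    ! this rank's portion in nrstart/nrstop.
    subroutine initialize_radial_data_sketch(n_r_max)
       integer, intent(in) :: n_r_max

       allocate( radial_balance(0:n_procs-1) )
       call getblocks(radial_balance, n_r_max, n_procs)
       nrstart = radial_balance(rank)%nstart
       nrstop  = radial_balance(rank)%nstop
    end subroutine initialize_radial_data_sketch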

subroutine  radial_data/finalize_radial_data()¶
Called from:

magic

communications.f90¶

Description

This module contains the different MPI communicators used in MagIC.

Quick access

Types:

gather_type

Variables:

create_gather_type, destroy_gather_type, find_faster_block, find_faster_comm, get_global_sum, get_global_sum_cmplx_1d, get_global_sum_cmplx_2d, get_global_sum_real_2d, gt_cheb, gt_ic, gt_oc, lm2lo_redist, lo2lm_redist, reduce_radial, reduce_radial_1d, reduce_radial_2d, send_lm_pair_to_master, send_lm_pair_to_master_arr, send_lm_pair_to_master_scal_cmplx, send_lm_pair_to_master_scal_real, send_scal_lm_to_master, temp_gather_lo

Routines:

allgather_from_rloc(), finalize_communications(), gather_all_from_lo_to_rank0(), gather_from_lo_to_rank0(), gather_from_rloc(), initialize_communications(), myallgather(), reduce_scalar(), scatter_from_rank0_to_lo(), transp_phi2r(), transp_r2phi()

Needed modules

  • mpimod

  • constants (zero()): module containing constants and parameters used in the code.

  • precision_mod: This module controls the precision used in MagIC

  • mem_alloc (memwrite(), bytes_allocated()): This little module is used to estimate the global memory allocation used in MagIC

  • parallel_mod (rank(), n_procs(), ierr(), load()): This module contains the blocking information

  • truncation (l_max(), lm_max(), minc(), n_r_max(), n_r_ic_max(), fd_order(), fd_order_bound(), m_max(), m_min()): This module defines the grid points and the truncation

  • blocking (st_map(), lo_map(), lm_balance(), llm(), ulm()): Module containing blocking information

  • radial_data (nrstart(), nrstop(), radial_balance()): This module defines the MPI decomposition in the radial direction.

  • logic (l_mag(), l_conv(), l_heat(), l_chemical_conv(), l_finite_diff(), l_mag_kin(), l_double_curl(), l_save_out(), l_packed_transp(), l_parallel_solve(), l_mag_par_solve()): Module containing the logicals that control the run

  • useful (abortrun()): This module contains several useful routines.

  • output_data (n_log_file(), log_file()): This module contains the parameters for output control

  • iso_fortran_env (output_unit())

  • mpi_ptop_mod (type_mpiptop()): This module contains the implementation of MPI_Isend/MPI_Irecv global transpose

  • mpi_alltoall_mod (type_mpiatoav(), type_mpiatoaw(), type_mpiatoap()): This module contains the implementation of all-to-all global communicators

  • charmanip (capitalize()): This module contains several useful routines to manipulate character strings

  • num_param (mpi_transp(), mpi_packing()): Module containing numerical and control parameters

  • mpi_transp_mod (type_mpitransp()): This is an abstract class used to define MPI transposers. The actual implementation is deferred to either point-to-point (MPI_Isend/MPI_Irecv) communications or all-to-all (MPI_Alltoall)

Types

  • type  communications/gather_type¶
    Type fields:
    • % dim2 [integer ]

    • % gather_mpi_type (*) [integer ,allocatable]

Variables

  • communications/create_gather_type [private]¶
  • communications/destroy_gather_type [private]¶
  • communications/find_faster_block [private]¶
  • communications/find_faster_comm [private]¶
  • communications/get_global_sum [public]¶
  • communications/get_global_sum_cmplx_1d [private]¶
  • communications/get_global_sum_cmplx_2d [private]¶
  • communications/get_global_sum_real_2d [private]¶
  • communications/gt_cheb [gather_type,public]¶
  • communications/gt_ic [gather_type,public]¶
  • communications/gt_oc [gather_type,public]¶
  • communications/lm2lo_redist [private]¶
  • communications/lo2lm_redist [private]¶
  • communications/reduce_radial [public]¶
  • communications/reduce_radial_1d [private]¶
  • communications/reduce_radial_2d [private]¶
  • communications/send_lm_pair_to_master [public]¶
  • communications/send_lm_pair_to_master_arr [private]¶
  • communications/send_lm_pair_to_master_scal_cmplx [private]¶
  • communications/send_lm_pair_to_master_scal_real [private]¶
  • communications/send_scal_lm_to_master [private]¶
  • communications/temp_gather_lo (*) [complex,private/allocatable]¶

Subroutines and functions

subroutine  communications/initialize_communications()¶
Called from:

magic

Call to:

capitalize(), memwrite()

subroutine  communications/finalize_communications()¶
Called from:

magic

subroutine  communications/gather_all_from_lo_to_rank0(self, arr_lo, arr_full)¶
Parameters:
  • self [gather_type ]

  • arr_lo (1 - llm + ulm,self%dim2) [complex ]

  • arr_full (lm_max,self%dim2) [complex ]

Called from:

fields_average(), write_pot_mpi(), write_pot(), output(), store()
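
A typical usage sketch, gathering a hypothetical lm-distributed field w_lo into a full array on rank 0 before writing output (gt_oc is the gather_type instance declared above; w_lo, w_full are illustrative names, and cp is assumed to be the precision kind from precision_mod):

    ! Sketch: collect an (l,m)-distributed field on rank 0 for output.
    complex(cp) :: w_lo(llm:ulm,n_r_max)   ! local (l,m) chunk
    complex(cp) :: w_full(lm_max,n_r_max)  ! only meaningful on rank 0

    call gather_all_from_lo_to_rank0(gt_oc, w_lo, w_full)
    if ( rank == 0 ) then
       ! ... write w_full to disk ...
    end if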

subroutine  communications/gather_from_lo_to_rank0(arr_lo, arr_full)¶
Parameters:
  • arr_lo (1 - llm + ulm) [complex ]

  • arr_full (lm_max) [complex ]

Called from:

fields_average(), get_e_mag(), get_onset(), write_bcmb(), calc_dtb_frame_ic(), output(), store_mpi()

subroutine  communications/scatter_from_rank0_to_lo(arr_full, arr_lo)¶

This subroutine scatters a complex input array of size lm_max and rearranges the (l,m) pairs using the local mapping.

Parameters:
  • arr_full (lm_max) [complex ,in]

  • arr_lo (1 - llm + ulm) [complex ,out]

Called from:

readstartfields_old(), readstartfields(), readstartfields_mpi(), step_time()
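
This is the inverse of the gathers above and is what the restart readers use: rank 0 holds the full spectral array and every rank receives its local chunk. A usage sketch with the same illustrative array names as before:

    ! Sketch: distribute a full spectral field read on rank 0.
    if ( rank == 0 ) then
       ! ... read w_full(1:lm_max) from the checkpoint file ...
    end if
    call scatter_from_rank0_to_lo(w_full, w_lo)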

subroutine  communications/gather_from_rloc(arr_rloc, arr_glob, irank)¶

This subroutine gathers an r-distributed array on rank=irank

Parameters:
  • arr_rloc (1 - nrstart + nrstop) [real ,in]

  • arr_glob (n_r_max) [real ,out]

  • irank [integer ,in]

Called from:

outhemi(), outhelicity(), outphase(), outpar(), outperppar(), get_power()

subroutine  communications/allgather_from_rloc(arr_rloc, arr_glob)¶

This subroutine gathers an r-distributed array on all ranks

Parameters:
  • arr_rloc (1 - nrstart + nrstop) [real ,in]

  • arr_glob (n_r_max) [real ,out]

Called from:

get_angular_moment_rloc()
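
This operation maps naturally onto MPI_Allgatherv, with counts and displacements taken from radial_balance. A hedged sketch of that inner call (the double-precision datatype is an assumption):

    ! Sketch: allgather an r-distributed array using radial_balance.
    integer :: p, rcounts(0:n_procs-1), rdispls(0:n_procs-1)

    do p=0,n_procs-1
       rcounts(p) = radial_balance(p)%n_per_rank
       rdispls(p) = radial_balance(p)%nstart-1
    end do
    call MPI_Allgatherv(arr_rloc, nrstop-nrstart+1, MPI_DOUBLE_PRECISION, &
         &              arr_glob, rcounts, rdispls, MPI_DOUBLE_PRECISION, &
         &              MPI_COMM_WORLD, ierr)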

subroutine  communications/reduce_scalar(scal_dist, scal_glob, irank)¶
Parameters:
  • scal_dist [real ,in]

  • scal_glob [real ,inout]

  • irank [integer ,in]

Called from:

get_e_mag()

subroutine  communications/transp_r2phi(arr_rloc, arr_ploc, phi_balance, npstart, npstop)¶

This subroutine computes an MPI transpose between an R-distributed array and a Phi-distributed array

Parameters:
  • arr_rloc (*,*,1 - nrstart + nrstop) [real ,in] :: Input array (R-distributed)

  • arr_ploc (*,1 - npstart + npstop,*) [real ,out] :: Output array (Phi-distributed)

  • phi_balance (1 +) [load ,in] :: Balancing info along phi

  • npstart [integer ,in] :: First index for phi-distributed arrays

  • npstop [integer ,in] :: Last index for phi-distributed arrays

Called from:

initphi(), outgeos(), calc_geos_frame(), outphase()
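
The exchange behind this transpose is the classic slab redistribution: out of its radial slab, each rank sends the phi block owned by every other rank, which fits an MPI_Alltoallv. A schematic sketch of the counts involved (buffer packing elided; n_theta, sbuf and rbuf are illustrative names, not MagIC's):

    ! Sketch: counts/displacements for an r-to-phi Alltoallv exchange.
    do p=0,n_procs-1
       scounts(p) = n_theta*phi_balance(p)%n_per_rank*(nrstop-nrstart+1)
       rcounts(p) = n_theta*(npstop-npstart+1)*radial_balance(p)%n_per_rank
    end do
    sdispls(0) = 0; rdispls(0) = 0
    do p=1,n_procs-1
       sdispls(p) = sdispls(p-1)+scounts(p-1)
       rdispls(p) = rdispls(p-1)+rcounts(p-1)
    end do
    ! ... pack sbuf from arr_rloc in rank order, then:
    call MPI_Alltoallv(sbuf, scounts, sdispls, MPI_DOUBLE_PRECISION, &
         &             rbuf, rcounts, rdispls, MPI_DOUBLE_PRECISION, &
         &             MPI_COMM_WORLD, ierr)

transp_phi2r below performs the same exchange with the send and receive roles swapped.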

subroutine  communications/transp_phi2r(arr_ploc, arr_rloc, phi_balance, npstart, npstop)¶

This subroutine computes an MPI transpose between a Phi-distributed array and an R-distributed array

Parameters:
  • arr_ploc (*,1 - npstart + npstop,*) [real ,in] :: Input array (Phi-distributed)

  • arr_rloc (*,*,1 - nrstart + nrstop) [real ,out] :: Output array (R-distributed)

  • phi_balance (1 +) [load ,in] :: Balancing info along phi

  • npstart [integer ,in] :: First index for phi-distributed arrays

  • npstop [integer ,in] :: Last index for phi-distributed arrays

Called from:

initphi()

subroutine  communications/myallgather(arr, dim1, dim2)¶
Parameters:
  • arr (dim1,dim2) [complex ,inout]

  • dim1 [integer ,in]

  • dim2 [integer ,in]

Use :

blocking, parallel_mod

mpi_transpose.f90¶

Description

This is an abstract class used to define MPI transposers. The actual implementation is deferred to either point-to-point (MPI_Isend/MPI_Irecv) communications or all-to-all (MPI_Alltoall).

Quick access

Types:

type_mpitransp

Needed modules

  • precision_mod: This module controls the precision used in MagIC

  • truncation (lm_max(), n_r_max()): This module defines the grid points and the truncation

  • radial_data (nrstart(), nrstop()): This module defines the MPI decomposition in the radial direction.

  • blocking (llm(), ulm()): Module containing blocking information

Types

  • type  mpi_transp_mod/type_mpitransp¶
    Type fields:
    • % n_fields [integer ]
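
The deferred-implementation pattern named in the description is the standard Fortran abstract type with deferred type-bound procedures. A self-contained sketch of that pattern (the binding and interface names are illustrative, not necessarily those used in MagIC):

    module mpi_transp_sketch
       implicit none
       integer, parameter :: cp = kind(1.0d0)  ! stand-in for precision_mod

       type, abstract :: type_mpitransp
          integer :: n_fields
       contains
          procedure(transp_if), deferred :: transp_lm2r
          procedure(transp_if), deferred :: transp_r2lm
       end type type_mpitransp

       abstract interface
          subroutine transp_if(this, arr_lm, arr_r)
             import :: type_mpitransp, cp
             class(type_mpitransp) :: this
             complex(cp), intent(in)  :: arr_lm(:,:)
             complex(cp), intent(out) :: arr_r(:,:)
          end subroutine transp_if
       end interface
    end module mpi_transp_sketch

Concrete types such as type_mpiptop (point-to-point) and type_mpiatoav (all-to-all) then extend type_mpitransp and supply the bodies.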

Variables

  • mpi_transp_mod/unknown_interface [private]¶

parallel_solvers.f90¶

Description

This module contains the routines that are used to solve linear banded problems with R-distributed arrays.

Quick access

Types:

type_penta_par, type_tri_par

Variables:

finalize_3, finalize_5, initialize_3, initialize_5, prepare_mat_3, prepare_mat_5, solver_dn_3, solver_dn_5, solver_finish_3, solver_finish_5, solver_single, solver_up_3, solver_up_5

Needed modules

  • precision_mod: This module controls the precision used in MagIC

  • parallel_mod: This module contains the blocking information

  • radial_data (n_r_cmb(), n_r_icb()): This module defines the MPI decomposition in the radial direction.

  • mem_alloc (bytes_allocated()): This little module is used to estimate the global memory allocation used in MagIC

  • constants (one()): module containing constants and parameters used in the code.

  • blocking (lm2l()): Module containing blocking information

  • truncation (lm_max()): This module defines the grid points and the truncation

Types

  • type  parallel_solvers/type_tri_par¶
    Type fields:
    • % diag (*,*) [real ,allocatable]

    • % lmax [integer ]

    • % lmin [integer ]

    • % low (*,*) [real ,allocatable]

    • % nrmax [integer ]

    • % nrmin [integer ]

    • % up (*,*) [real ,allocatable]

  • type  parallel_solvers/type_penta_par¶
    Type fields:
    • % diag (*,*) [real ,allocatable]

    • % lmax [integer ]

    • % lmin [integer ]

    • % low1 (*,*) [real ,allocatable]

    • % low2 (*,*) [real ,allocatable]

    • % nrmax [integer ]

    • % nrmin [integer ]

    • % up1 (*,*) [real ,allocatable]

    • % up2 (*,*) [real ,allocatable]

Variables

  • parallel_solvers/finalize_3 [private]¶
  • parallel_solvers/finalize_5 [private]¶
  • parallel_solvers/initialize_3 [private]¶
  • parallel_solvers/initialize_5 [private]¶
  • parallel_solvers/prepare_mat_3 [private]¶
  • parallel_solvers/prepare_mat_5 [private]¶
  • parallel_solvers/solver_dn_3 [private]¶
  • parallel_solvers/solver_dn_5 [private]¶
  • parallel_solvers/solver_finish_3 [private]¶
  • parallel_solvers/solver_finish_5 [private]¶
  • parallel_solvers/solver_single [private]¶

    Used for a single right-hand side

  • parallel_solvers/solver_up_3 [private]¶
  • parallel_solvers/solver_up_5 [private]¶
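
The solver_dn_*/solver_up_*/solver_finish_* split mirrors the two sweeps of the Thomas algorithm, pipelined over the R-distributed rows: the downward elimination runs from the first radial rank to the last, the upward substitution runs back, and the finish stage presumably reconciles the rows at rank boundaries. For orientation, a serial sketch of the 3-diagonal kernel that the distributed routines implement (this is the textbook algorithm, not MagIC's distributed code):

    ! Sketch: serial Thomas solve of
    ! low(i)*x(i-1) + diag(i)*x(i) + up(i)*x(i+1) = rhs(i).
    subroutine thomas_sketch(low, diag, up, rhs, n)
       implicit none
       integer, intent(in)    :: n
       real(8), intent(in)    :: low(n), diag(n), up(n)
       real(8), intent(inout) :: rhs(n)   ! overwritten with the solution
       real(8) :: d(n), fac
       integer :: i

       d(1) = diag(1)
       do i=2,n                  ! downward sweep (cf. solver_dn_3)
          fac    = low(i)/d(i-1)
          d(i)   = diag(i)-fac*up(i-1)
          rhs(i) = rhs(i)-fac*rhs(i-1)
       end do
       rhs(n) = rhs(n)/d(n)
       do i=n-1,1,-1             ! upward sweep (cf. solver_up_3)
          rhs(i) = (rhs(i)-up(i)*rhs(i+1))/d(i)
       end do
    end subroutine thomas_sketch

The penta-diagonal variants (solver_dn_5, solver_up_5, etc.) follow the same structure with two sub- and two super-diagonals.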
