# HG changeset patch
# User Markus Mützel
# Date 1598557056 -7200
# Node ID efe91da1a169b43ad0def412249b1a89b4f4c408
# Parent 6bbd4df61efc429e826c162c136a599bbeeae523
Fix compiling ARPACK with gcc 10.

* src/arpack-1-gcc10.patch: Cherry pick (part of) patch from upstream to
  (hopefully) fix compilation with gcc 10.
* dist-files.mk: Add new file to list.

diff -r 6bbd4df61efc -r efe91da1a169 dist-files.mk
--- a/dist-files.mk	Thu Aug 27 10:53:37 2020 -0400
+++ b/dist-files.mk	Thu Aug 27 21:37:36 2020 +0200
@@ -7,6 +7,7 @@
   armadillo-1-staticlib.patch \
   armadillo-test.cpp \
   armadillo.mk \
+  arpack-1-gcc10.patch \
   arpack.mk \
   atk-1-DllMain.patch \
   atk.mk \
diff -r 6bbd4df61efc -r efe91da1a169 src/arpack-1-gcc10.patch
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/arpack-1-gcc10.patch	Thu Aug 27 21:37:36 2020 +0200
@@ -0,0 +1,3233 @@
+From ad82dcbc0beeed5616e2d5a28a089d9785f8b8b8 Mon Sep 17 00:00:00 2001
+From: dschwoerer
+Date: Mon, 24 Feb 2020 09:05:43 +0000
+Subject: [PATCH] gcc-10 (parpack) and mpich (#245)
+
+* port PARPACK also to gcc 10
+
+* Ensure that the output buffer is a rank-1 vector
+
+The vector should also not be a temporary, so that we
+can use the result.
+
+* use valid address of binary
+
+openmpi ignores this error, but mpich doesn't
+
+* Add travis test with gcc 10 and mpich
+
+* After pulling fedora:rawhide, use fedora:rawhide
+
+* simplify travis tests for fedora
+
+* run using bash -v (permissions issue otherwise)
+
+* run using bash -v (permissions issue otherwise)
+
+* fix permission of script
+
+* Add she-bang to allow execution
+
+* fix test for fedora
+
+Co-authored-by: Sylvestre Ledru
+
+diff --git a/PARPACK/SRC/BLACS/pcgetv0.f b/PARPACK/SRC/BLACS/pcgetv0.f
+index 0325fda3..191d70fd 100644
+--- a/PARPACK/SRC/BLACS/pcgetv0.f
++++ b/PARPACK/SRC/BLACS/pcgetv0.f
+@@ -406,9 +406,9 @@ subroutine pcgetv0
+ c %--------------------------------------%
+ c
+ if (msglvl .gt. 
2) then +- call psvout (comm, logfil, 1, rnorm0, ndigit, ++ call psvout (comm, logfil, 1, [rnorm0], ndigit, + & '_getv0: re-orthonalization ; rnorm0 is') +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_getv0: re-orthonalization ; rnorm is') + end if + c +diff --git a/PARPACK/SRC/BLACS/pcnaitr.f b/PARPACK/SRC/BLACS/pcnaitr.f +index a9f17ed8..04fa1cbe 100644 +--- a/PARPACK/SRC/BLACS/pcnaitr.f ++++ b/PARPACK/SRC/BLACS/pcnaitr.f +@@ -401,9 +401,9 @@ subroutine pcnaitr + 1000 continue + c + if (msglvl .gt. 1) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: generating Arnoldi vector number') +- call pcvout (comm, logfil, 1, rnorm, ndigit, ++ call pcvout (comm, logfil, 1, [rnorm], ndigit, + & '_naitr: B-norm of the current residual is') + end if + c +@@ -423,7 +423,7 @@ subroutine pcnaitr + c %---------------------------------------------------% + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: ****** RESTART AT STEP ******') + end if + c +@@ -757,7 +757,7 @@ subroutine pcnaitr + end if + c + if (msglvl .gt. 0 .and. iter .gt. 0 ) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: Iterative refinement for Arnoldi residual') + if (msglvl .gt. 2) then + rtemp(1) = rnorm +diff --git a/PARPACK/SRC/BLACS/pcnapps.f b/PARPACK/SRC/BLACS/pcnapps.f +index 47fb7e70..e40793cf 100644 +--- a/PARPACK/SRC/BLACS/pcnapps.f ++++ b/PARPACK/SRC/BLACS/pcnapps.f +@@ -284,9 +284,9 @@ subroutine pcnapps + sigma = shift(jj) + c + if (msglvl .gt. 
2 ) then +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_napps: shift number.') +- call pcvout (comm, logfil, 1, sigma, ndigit, ++ call pcvout (comm, logfil, 1, [sigma], ndigit, + & '_napps: Value of the shift ') + end if + c +@@ -307,9 +307,9 @@ subroutine pcnapps + if ( abs(real(h(i+1,i))) + & .le. max(ulp*tst1, smlnum) ) then + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, i, ndigit, ++ call pivout (comm, logfil, 1, [i], ndigit, + & '_napps: matrix splitting at row/column no.') +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_napps: matrix splitting with shift number.') + call pcvout (comm, logfil, 1, h(i+1,i), ndigit, + & '_napps: off diagonal element.') +@@ -323,9 +323,9 @@ subroutine pcnapps + 40 continue + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, istart, ndigit, ++ call pivout (comm, logfil, 1, [istart], ndigit, + & '_napps: Start of current block ') +- call pivout (comm, logfil, 1, iend, ndigit, ++ call pivout (comm, logfil, 1, [iend], ndigit, + & '_napps: End of current block ') + end if + c +@@ -501,7 +501,7 @@ subroutine pcnapps + & '_napps: sigmak = (e_{kev+p}^T*Q)*e_{kev}') + call pcvout (comm, logfil, 1, h(kev+1,kev), ndigit, + & '_napps: betak = e_{kev+1}^T*H*e_{kev}') +- call pivout (comm, logfil, 1, kev, ndigit, ++ call pivout (comm, logfil, 1, [kev], ndigit, + & '_napps: Order of the final Hessenberg matrix ') + if (msglvl .gt. 2) then + call pcmout (comm, logfil, kev, kev, h, ldh, ndigit, +diff --git a/PARPACK/SRC/BLACS/pcnaup2.f b/PARPACK/SRC/BLACS/pcnaup2.f +index 55868069..757b12c3 100644 +--- a/PARPACK/SRC/BLACS/pcnaup2.f ++++ b/PARPACK/SRC/BLACS/pcnaup2.f +@@ -398,7 +398,7 @@ subroutine pcnaup2 + iter = iter + 1 + c + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, iter, ndigit, ++ call pivout (comm, logfil, 1, [iter], ndigit, + & '_naup2: **** Start of major iteration number ****') + end if + c +@@ -411,9 +411,9 @@ subroutine pcnaup2 + np = kplusp - nev + c + if (msglvl .gt. 1) then +- call pivout (comm, logfil, 1, nev, ndigit, ++ call pivout (comm, logfil, 1, [nev], ndigit, + & '_naup2: The length of the current Arnoldi factorization') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naup2: Extend the Arnoldi factorization by') + end if + c +@@ -440,7 +440,7 @@ subroutine pcnaup2 + update = .false. + c + if (msglvl .gt. 1) then +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_naup2: Corresponding B-norm of the residual') + end if + c +@@ -671,7 +671,7 @@ subroutine pcnaup2 + end if + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, nconv, ndigit, ++ call pivout (comm, logfil, 1, [nconv], ndigit, + & '_naup2: no. of "converged" Ritz values at this iter.') + if (msglvl .gt. 1) then + kp(1) = nev +@@ -711,7 +711,7 @@ subroutine pcnaup2 + end if + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naup2: The number of shifts to apply ') + call pcvout (comm, logfil, np, ritz, ndigit, + & '_naup2: values of the shifts') +@@ -776,7 +776,7 @@ subroutine pcnaup2 + cnorm = .false. + c + if (msglvl .gt. 2) then +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_naup2: B-norm of residual for compressed factorization') + call pcmout (comm, logfil, nev, nev, h, ldh, ndigit, + & '_naup2: Compressed upper Hessenberg matrix H') +diff --git a/PARPACK/SRC/BLACS/pcnaupd.f b/PARPACK/SRC/BLACS/pcnaupd.f +index b350199e..55bb655d 100644 +--- a/PARPACK/SRC/BLACS/pcnaupd.f ++++ b/PARPACK/SRC/BLACS/pcnaupd.f +@@ -618,9 +618,9 @@ subroutine pcnaupd + if (info .eq. 
2) info = 3 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, mxiter, ndigit, ++ call pivout (comm, logfil, 1, [mxiter], ndigit, + & '_naupd: Number of update iterations taken') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naupd: Number of wanted "converged" Ritz values') + call pcvout (comm, logfil, np, workl(ritz), ndigit, + & '_naupd: The final Ritz values') +diff --git a/PARPACK/SRC/BLACS/pcneupd.f b/PARPACK/SRC/BLACS/pcneupd.f +index 53cf2d24..da4a9ec5 100644 +--- a/PARPACK/SRC/BLACS/pcneupd.f ++++ b/PARPACK/SRC/BLACS/pcneupd.f +@@ -558,9 +558,9 @@ subroutine pcneupd + c %-----------------------------------------------------------% + c + if (msglvl .gt. 2) then +- call pivout(comm, logfil, 1, numcnv, ndigit, ++ call pivout(comm, logfil, 1, [numcnv], ndigit, + & '_neupd: Number of specified eigenvalues') +- call pivout(comm, logfil, 1, nconv, ndigit, ++ call pivout(comm, logfil, 1, [nconv], ndigit, + & '_neupd: Number of "converged" eigenvalues') + end if + c +diff --git a/PARPACK/SRC/BLACS/pcngets.f b/PARPACK/SRC/BLACS/pcngets.f +index f9cca353..89cd67ae 100644 +--- a/PARPACK/SRC/BLACS/pcngets.f ++++ b/PARPACK/SRC/BLACS/pcngets.f +@@ -177,8 +177,8 @@ subroutine pcngets ( comm, ishift, which, kev, np, ritz, bounds) + tcgets = tcgets + (t1 - t0) + c + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, kev, ndigit, '_ngets: KEV is') +- call pivout (comm, logfil, 1, np, ndigit, '_ngets: NP is') ++ call pivout (comm, logfil, 1, [kev], ndigit, '_ngets: KEV is') ++ call pivout (comm, logfil, 1, [np], ndigit, '_ngets: NP is') + call pcvout (comm, logfil, kev+np, ritz, ndigit, + & '_ngets: Eigenvalues of current H matrix ') + call pcvout (comm, logfil, kev+np, bounds, ndigit, +diff --git a/PARPACK/SRC/BLACS/pdgetv0.f b/PARPACK/SRC/BLACS/pdgetv0.f +index 9c3a1d99..237443dd 100644 +--- a/PARPACK/SRC/BLACS/pdgetv0.f ++++ b/PARPACK/SRC/BLACS/pdgetv0.f +@@ -385,9 +385,9 @@ subroutine pdgetv0 + c %--------------------------------------% + c + if (msglvl .gt. 2) then +- call pdvout (comm, logfil, 1, rnorm0, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm0], ndigit, + & '_getv0: re-orthonalization ; rnorm0 is') +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_getv0: re-orthonalization ; rnorm is') + end if + c +@@ -418,7 +418,7 @@ subroutine pdgetv0 + 50 continue + c + if (msglvl .gt. 0) then +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_getv0: B-norm of initial / restarted starting vector') + end if + if (msglvl .gt. 2) then +diff --git a/PARPACK/SRC/BLACS/pdnaitr.f b/PARPACK/SRC/BLACS/pdnaitr.f +index f6557560..fb46494f 100644 +--- a/PARPACK/SRC/BLACS/pdnaitr.f ++++ b/PARPACK/SRC/BLACS/pdnaitr.f +@@ -390,9 +390,9 @@ subroutine pdnaitr + 1000 continue + c + if (msglvl .gt. 1) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: generating Arnoldi vector number') +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_naitr: B-norm of the current residual is') + end if + c +@@ -412,7 +412,7 @@ subroutine pdnaitr + c %---------------------------------------------------% + c + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: ****** RESTART AT STEP ******') + end if + c +@@ -745,7 +745,7 @@ subroutine pdnaitr + end if + c + if (msglvl .gt. 0 .and. iter .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: Iterative refinement for Arnoldi residual') + if (msglvl .gt. 2) then + xtemp(1) = rnorm +diff --git a/PARPACK/SRC/BLACS/pdnapps.f b/PARPACK/SRC/BLACS/pdnapps.f +index 56e3414d..eadca320 100644 +--- a/PARPACK/SRC/BLACS/pdnapps.f ++++ b/PARPACK/SRC/BLACS/pdnapps.f +@@ -276,11 +276,11 @@ subroutine pdnapps + sigmai = shifti(jj) + c + if (msglvl .gt. 2 ) then +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_napps: shift number.') +- call pdvout (comm, logfil, 1, sigmar, ndigit, ++ call pdvout (comm, logfil, 1, [sigmar], ndigit, + & '_napps: The real part of the shift ') +- call pdvout (comm, logfil, 1, sigmai, ndigit, ++ call pdvout (comm, logfil, 1, [sigmai], ndigit, + & '_napps: The imaginary part of the shift ') + end if + c +@@ -347,7 +347,7 @@ subroutine pdnapps + if (msglvl .gt. 0) then + call pivout (comm, logfil, 1, i, ndigit, + & '_napps: matrix splitting at row/column no.') +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_napps: matrix splitting with shift number.') + call pdvout (comm, logfil, 1, h(i+1,i), ndigit, + & '_napps: off diagonal element.') +@@ -361,9 +361,9 @@ subroutine pdnapps + 40 continue + c + if (msglvl .gt. 
2) then +- call pivout (comm, logfil, 1, istart, ndigit, ++ call pivout (comm, logfil, 1, [istart], ndigit, + & '_napps: Start of current block ') +- call pivout (comm, logfil, 1, iend, ndigit, ++ call pivout (comm, logfil, 1, [iend], ndigit, + & '_napps: End of current block ') + end if + c +@@ -635,7 +635,7 @@ subroutine pdnapps + & '_napps: sigmak = (e_{kev+p}^T*Q)*e_{kev}') + call pdvout (comm, logfil, 1, h(kev+1,kev), ndigit, + & '_napps: betak = e_{kev+1}^T*H*e_{kev}') +- call pivout (comm, logfil, 1, kev, ndigit, ++ call pivout (comm, logfil, 1, [kev], ndigit, + & '_napps: Order of the final Hessenberg matrix ') + if (msglvl .gt. 2) then + call pdmout (comm, logfil, kev, kev, h, ldh, ndigit, +diff --git a/PARPACK/SRC/BLACS/pdnaup2.f b/PARPACK/SRC/BLACS/pdnaup2.f +index becea72d..a295dbbb 100644 +--- a/PARPACK/SRC/BLACS/pdnaup2.f ++++ b/PARPACK/SRC/BLACS/pdnaup2.f +@@ -405,7 +405,7 @@ subroutine pdnaup2 + iter = iter + 1 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, iter, ndigit, ++ call pivout (comm, logfil, 1, [iter], ndigit, + & '_naup2: **** Start of major iteration number ****') + end if + c +@@ -418,9 +418,9 @@ subroutine pdnaup2 + np = kplusp - nev + c + if (msglvl .gt. 1) then +- call pivout (comm, logfil, 1, nev, ndigit, ++ call pivout (comm, logfil, 1, [nev], ndigit, + & '_naup2: The length of the current Arnoldi factorization') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naup2: Extend the Arnoldi factorization by') + end if + c +@@ -452,7 +452,7 @@ subroutine pdnaup2 + update = .false. + c + if (msglvl .gt. 1) then +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_naup2: Corresponding B-norm of the residual') + end if + c +@@ -696,7 +696,7 @@ subroutine pdnaup2 + end if + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, nconv, ndigit, ++ call pivout (comm, logfil, 1, [nconv], ndigit, + & '_naup2: no. 
of "converged" Ritz values at this iter.') + if (msglvl .gt. 1) then + kp(1) = nev +@@ -748,7 +748,7 @@ subroutine pdnaup2 + end if + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naup2: The number of shifts to apply ') + call pdvout (comm, logfil, np, ritzr, ndigit, + & '_naup2: Real part of the shifts') +@@ -815,7 +815,7 @@ subroutine pdnaup2 + cnorm = .false. + c + if (msglvl .gt. 2) then +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_naup2: B-norm of residual for compressed factorization') + call pdmout (comm, logfil, nev, nev, h, ldh, ndigit, + & '_naup2: Compressed upper Hessenberg matrix H') +diff --git a/PARPACK/SRC/BLACS/pdnaupd.f b/PARPACK/SRC/BLACS/pdnaupd.f +index d947755b..ed0fa20f 100644 +--- a/PARPACK/SRC/BLACS/pdnaupd.f ++++ b/PARPACK/SRC/BLACS/pdnaupd.f +@@ -642,9 +642,9 @@ subroutine pdnaupd + if (info .eq. 2) info = 3 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, mxiter, ndigit, ++ call pivout (comm, logfil, 1, [mxiter], ndigit, + & '_naupd: Number of update iterations taken') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naupd: Number of wanted "converged" Ritz values') + call pdvout (comm, logfil, np, workl(ritzr), ndigit, + & '_naupd: Real part of the final Ritz values') +diff --git a/PARPACK/SRC/BLACS/pdneupd.f b/PARPACK/SRC/BLACS/pdneupd.f +index 0ff911ff..321202f9 100644 +--- a/PARPACK/SRC/BLACS/pdneupd.f ++++ b/PARPACK/SRC/BLACS/pdneupd.f +@@ -617,9 +617,9 @@ subroutine pdneupd + c %-----------------------------------------------------------% + c + if (msglvl .gt. 
2) then +- call pivout(comm, logfil, 1, numcnv, ndigit, ++ call pivout(comm, logfil, 1, [numcnv], ndigit, + & '_neupd: Number of specified eigenvalues') +- call pivout(comm, logfil, 1, nconv, ndigit, ++ call pivout(comm, logfil, 1, [nconv], ndigit, + & '_neupd: Number of "converged" eigenvalues') + end if + c +diff --git a/PARPACK/SRC/BLACS/pdngets.f b/PARPACK/SRC/BLACS/pdngets.f +index 71ed6a7c..12a691f6 100644 +--- a/PARPACK/SRC/BLACS/pdngets.f ++++ b/PARPACK/SRC/BLACS/pdngets.f +@@ -226,8 +226,8 @@ subroutine pdngets + tngets = tngets + (t1 - t0) + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, kev, ndigit, '_ngets: KEV is') +- call pivout (comm, logfil, 1, np, ndigit, '_ngets: NP is') ++ call pivout (comm, logfil, 1, [kev], ndigit, '_ngets: KEV is') ++ call pivout (comm, logfil, 1, [np], ndigit, '_ngets: NP is') + call pdvout (comm, logfil, kev+np, ritzr, ndigit, + & '_ngets: Eigenvalues of current H matrix -- real part') + call pdvout (comm, logfil, kev+np, ritzi, ndigit, +diff --git a/PARPACK/SRC/BLACS/pdsaitr.f b/PARPACK/SRC/BLACS/pdsaitr.f +index 37ca61e8..9a2cdfc1 100644 +--- a/PARPACK/SRC/BLACS/pdsaitr.f ++++ b/PARPACK/SRC/BLACS/pdsaitr.f +@@ -389,9 +389,9 @@ subroutine pdsaitr + 1000 continue + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_saitr: generating Arnoldi vector no.') +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_saitr: B-norm of the current residual =') + end if + c +@@ -409,7 +409,7 @@ subroutine pdsaitr + c %---------------------------------------------------% + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_saitr: ****** restart at step ******') + end if + c +@@ -767,7 +767,7 @@ subroutine pdsaitr + end if + c + if (msglvl .gt. 0 .and. iter .gt. 
0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: Iterative refinement for Arnoldi residual') + if (msglvl .gt. 2) then + xtemp(1) = rnorm +diff --git a/PARPACK/SRC/BLACS/pdsapps.f b/PARPACK/SRC/BLACS/pdsapps.f +index b3a05cf4..4e2c0760 100644 +--- a/PARPACK/SRC/BLACS/pdsapps.f ++++ b/PARPACK/SRC/BLACS/pdsapps.f +@@ -272,9 +272,9 @@ subroutine pdsapps + big = abs(h(i,2)) + abs(h(i+1,2)) + if (h(i+1,1) .le. epsmch*big) then + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, i, ndigit, ++ call pivout (comm, logfil, 1, [i], ndigit, + & '_sapps: deflation at row/column no.') +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_sapps: occurred before shift number.') + call pdvout (comm, logfil, 1, h(i+1,1), ndigit, + & '_sapps: the corresponding off diagonal element') +@@ -443,7 +443,7 @@ subroutine pdsapps + big = abs(h(i,2)) + abs(h(i+1,2)) + if (h(i+1,1) .le. epsmch*big) then + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, i, ndigit, ++ call pivout (comm, logfil, 1, [i], ndigit, + & '_sapps: deflation at row/column no.') + call pdvout (comm, logfil, 1, h(i+1,1), ndigit, + & '_sapps: the corresponding off diagonal element') +diff --git a/PARPACK/SRC/BLACS/pdsaup2.f b/PARPACK/SRC/BLACS/pdsaup2.f +index cf934016..599aad62 100644 +--- a/PARPACK/SRC/BLACS/pdsaup2.f ++++ b/PARPACK/SRC/BLACS/pdsaup2.f +@@ -421,13 +421,13 @@ subroutine pdsaup2 + iter = iter + 1 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, iter, ndigit, ++ call pivout (comm, logfil, 1, [iter], ndigit, + & '_saup2: **** Start of major iteration number ****') + end if + if (msglvl .gt. 
1) then +- call pivout (comm, logfil, 1, nev, ndigit, ++ call pivout (comm, logfil, 1, [nev], ndigit, + & '_saup2: The length of the current Lanczos factorization') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_saup2: Extend the Lanczos factorization by') + end if + c +@@ -466,7 +466,7 @@ subroutine pdsaup2 + update = .false. + c + if (msglvl .gt. 1) then +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_saup2: Current B-norm of residual for factorization') + end if + c +@@ -716,7 +716,7 @@ subroutine pdsaup2 + end if + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, nconv, ndigit, ++ call pivout (comm, logfil, 1, [nconv], ndigit, + & '_saup2: no. of "converged" Ritz values at this iter.') + if (msglvl .gt. 1) then + kp(1) = nev +@@ -763,7 +763,7 @@ subroutine pdsaup2 + if (ishift .eq. 0) call dcopy (np, workl, 1, ritz, 1) + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_saup2: The number of shifts to apply ') + call pdvout (comm, logfil, np, workl, ndigit, + & '_saup2: shifts selected') +@@ -831,7 +831,7 @@ subroutine pdsaup2 + 130 continue + c + if (msglvl .gt. 2) then +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_saup2: B-norm of residual for NEV factorization') + call pdvout (comm, logfil, nev, h(1,2), ndigit, + & '_saup2: main diagonal of compressed H matrix') +diff --git a/PARPACK/SRC/BLACS/pdsaupd.f b/PARPACK/SRC/BLACS/pdsaupd.f +index 5f606d59..cb021128 100644 +--- a/PARPACK/SRC/BLACS/pdsaupd.f ++++ b/PARPACK/SRC/BLACS/pdsaupd.f +@@ -644,9 +644,9 @@ subroutine pdsaupd + if (info .eq. 2) info = 3 + c + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, mxiter, ndigit, ++ call pivout (comm, logfil, 1, [mxiter], ndigit, + & '_saupd: number of update iterations taken') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_saupd: number of "converged" Ritz values') + call pdvout (comm, logfil, np, workl(Ritz), ndigit, + & '_saupd: final Ritz values') +diff --git a/PARPACK/SRC/BLACS/pdseupd.f b/PARPACK/SRC/BLACS/pdseupd.f +index 956891cd..074b195b 100644 +--- a/PARPACK/SRC/BLACS/pdseupd.f ++++ b/PARPACK/SRC/BLACS/pdseupd.f +@@ -523,9 +523,9 @@ subroutine pdseupd + c %-----------------------------------------------------------% + c + if (msglvl .gt. 2) then +- call pivout(comm, logfil, 1, numcnv, ndigit, ++ call pivout(comm, logfil, 1, [numcnv], ndigit, + & '_neupd: Number of specified eigenvalues') +- call pivout(comm, logfil, 1, nconv, ndigit, ++ call pivout(comm, logfil, 1, [nconv], ndigit, + & '_neupd: Number of "converged" eigenvalues') + end if + c +diff --git a/PARPACK/SRC/BLACS/pdsgets.f b/PARPACK/SRC/BLACS/pdsgets.f +index aa549a25..d0f703bb 100644 +--- a/PARPACK/SRC/BLACS/pdsgets.f ++++ b/PARPACK/SRC/BLACS/pdsgets.f +@@ -216,8 +216,8 @@ subroutine pdsgets + tsgets = tsgets + (t1 - t0) + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, kev, ndigit, '_sgets: KEV is') +- call pivout (comm, logfil, 1, np, ndigit, '_sgets: NP is') ++ call pivout (comm, logfil, 1, [kev], ndigit, '_sgets: KEV is') ++ call pivout (comm, logfil, 1, [np], ndigit, '_sgets: NP is') + call pdvout (comm, logfil, kev+np, ritz, ndigit, + & '_sgets: Eigenvalues of current H matrix') + call pdvout (comm, logfil, kev+np, bounds, ndigit, +diff --git a/PARPACK/SRC/BLACS/psgetv0.f b/PARPACK/SRC/BLACS/psgetv0.f +index 9862d055..d48cb6db 100644 +--- a/PARPACK/SRC/BLACS/psgetv0.f ++++ b/PARPACK/SRC/BLACS/psgetv0.f +@@ -385,9 +385,9 @@ subroutine psgetv0 + c %--------------------------------------% + c + if (msglvl .gt. 
2) then +- call psvout (comm, logfil, 1, rnorm0, ndigit, ++ call psvout (comm, logfil, 1, [rnorm0], ndigit, + & '_getv0: re-orthonalization ; rnorm0 is') +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_getv0: re-orthonalization ; rnorm is') + end if + c +@@ -418,7 +418,7 @@ subroutine psgetv0 + 50 continue + c + if (msglvl .gt. 0) then +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_getv0: B-norm of initial / restarted starting vector') + end if + if (msglvl .gt. 2) then +diff --git a/PARPACK/SRC/BLACS/psnaitr.f b/PARPACK/SRC/BLACS/psnaitr.f +index 07aa0526..bbd2809c 100644 +--- a/PARPACK/SRC/BLACS/psnaitr.f ++++ b/PARPACK/SRC/BLACS/psnaitr.f +@@ -390,9 +390,9 @@ subroutine psnaitr + 1000 continue + c + if (msglvl .gt. 1) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: generating Arnoldi vector number') +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_naitr: B-norm of the current residual is') + end if + c +@@ -412,7 +412,7 @@ subroutine psnaitr + c %---------------------------------------------------% + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: ****** RESTART AT STEP ******') + end if + c +@@ -745,7 +745,7 @@ subroutine psnaitr + end if + c + if (msglvl .gt. 0 .and. iter .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: Iterative refinement for Arnoldi residual') + if (msglvl .gt. 2) then + xtemp(1) = rnorm +diff --git a/PARPACK/SRC/BLACS/psnapps.f b/PARPACK/SRC/BLACS/psnapps.f +index a515d0cd..ba668bba 100644 +--- a/PARPACK/SRC/BLACS/psnapps.f ++++ b/PARPACK/SRC/BLACS/psnapps.f +@@ -276,11 +276,11 @@ subroutine psnapps + sigmai = shifti(jj) + c + if (msglvl .gt. 
2 ) then +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_napps: shift number.') +- call psvout (comm, logfil, 1, sigmar, ndigit, ++ call psvout (comm, logfil, 1, [sigmar], ndigit, + & '_napps: The real part of the shift ') +- call psvout (comm, logfil, 1, sigmai, ndigit, ++ call psvout (comm, logfil, 1, [sigmai], ndigit, + & '_napps: The imaginary part of the shift ') + end if + c +@@ -347,7 +347,7 @@ subroutine psnapps + if (msglvl .gt. 0) then + call pivout (comm, logfil, 1, i, ndigit, + & '_napps: matrix splitting at row/column no.') +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_napps: matrix splitting with shift number.') + call psvout (comm, logfil, 1, h(i+1,i), ndigit, + & '_napps: off diagonal element.') +@@ -361,9 +361,9 @@ subroutine psnapps + 40 continue + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, istart, ndigit, ++ call pivout (comm, logfil, 1, [istart], ndigit, + & '_napps: Start of current block ') +- call pivout (comm, logfil, 1, iend, ndigit, ++ call pivout (comm, logfil, 1, [iend], ndigit, + & '_napps: End of current block ') + end if + c +@@ -635,7 +635,7 @@ subroutine psnapps + & '_napps: sigmak = (e_{kev+p}^T*Q)*e_{kev}') + call psvout (comm, logfil, 1, h(kev+1,kev), ndigit, + & '_napps: betak = e_{kev+1}^T*H*e_{kev}') +- call pivout (comm, logfil, 1, kev, ndigit, ++ call pivout (comm, logfil, 1, [kev], ndigit, + & '_napps: Order of the final Hessenberg matrix ') + if (msglvl .gt. 2) then + call psmout (comm, logfil, kev, kev, h, ldh, ndigit, +diff --git a/PARPACK/SRC/BLACS/psnaup2.f b/PARPACK/SRC/BLACS/psnaup2.f +index eff2a136..e4603273 100644 +--- a/PARPACK/SRC/BLACS/psnaup2.f ++++ b/PARPACK/SRC/BLACS/psnaup2.f +@@ -405,7 +405,7 @@ subroutine psnaup2 + iter = iter + 1 + c + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, iter, ndigit, ++ call pivout (comm, logfil, 1, [iter], ndigit, + & '_naup2: **** Start of major iteration number ****') + end if + c +@@ -418,9 +418,9 @@ subroutine psnaup2 + np = kplusp - nev + c + if (msglvl .gt. 1) then +- call pivout (comm, logfil, 1, nev, ndigit, ++ call pivout (comm, logfil, 1, [nev], ndigit, + & '_naup2: The length of the current Arnoldi factorization') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naup2: Extend the Arnoldi factorization by') + end if + c +@@ -452,7 +452,7 @@ subroutine psnaup2 + update = .false. + c + if (msglvl .gt. 1) then +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_naup2: Corresponding B-norm of the residual') + end if + c +@@ -696,7 +696,7 @@ subroutine psnaup2 + end if + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, nconv, ndigit, ++ call pivout (comm, logfil, 1, [nconv], ndigit, + & '_naup2: no. of "converged" Ritz values at this iter.') + if (msglvl .gt. 1) then + kp(1) = nev +@@ -748,7 +748,7 @@ subroutine psnaup2 + end if + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naup2: The number of shifts to apply ') + call psvout (comm, logfil, np, ritzr, ndigit, + & '_naup2: Real part of the shifts') +@@ -815,7 +815,7 @@ subroutine psnaup2 + cnorm = .false. + c + if (msglvl .gt. 2) then +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_naup2: B-norm of residual for compressed factorization') + call psmout (comm, logfil, nev, nev, h, ldh, ndigit, + & '_naup2: Compressed upper Hessenberg matrix H') +diff --git a/PARPACK/SRC/BLACS/psnaupd.f b/PARPACK/SRC/BLACS/psnaupd.f +index 55f34ce6..4f2484f9 100644 +--- a/PARPACK/SRC/BLACS/psnaupd.f ++++ b/PARPACK/SRC/BLACS/psnaupd.f +@@ -642,9 +642,9 @@ subroutine psnaupd + if (info .eq. 
2) info = 3 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, mxiter, ndigit, ++ call pivout (comm, logfil, 1, [mxiter], ndigit, + & '_naupd: Number of update iterations taken') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naupd: Number of wanted "converged" Ritz values') + call psvout (comm, logfil, np, workl(ritzr), ndigit, + & '_naupd: Real part of the final Ritz values') +diff --git a/PARPACK/SRC/BLACS/psneupd.f b/PARPACK/SRC/BLACS/psneupd.f +index 49847ffe..cbdaba28 100644 +--- a/PARPACK/SRC/BLACS/psneupd.f ++++ b/PARPACK/SRC/BLACS/psneupd.f +@@ -617,9 +617,9 @@ subroutine psneupd + c %-----------------------------------------------------------% + c + if (msglvl .gt. 2) then +- call pivout(comm, logfil, 1, numcnv, ndigit, ++ call pivout(comm, logfil, 1, [numcnv], ndigit, + & '_neupd: Number of specified eigenvalues') +- call pivout(comm, logfil, 1, nconv, ndigit, ++ call pivout(comm, logfil, 1, [nconv], ndigit, + & '_neupd: Number of "converged" eigenvalues') + end if + c +diff --git a/PARPACK/SRC/BLACS/psngets.f b/PARPACK/SRC/BLACS/psngets.f +index c0e30886..33d85dfe 100644 +--- a/PARPACK/SRC/BLACS/psngets.f ++++ b/PARPACK/SRC/BLACS/psngets.f +@@ -226,8 +226,8 @@ subroutine psngets + tngets = tngets + (t1 - t0) + c + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, kev, ndigit, '_ngets: KEV is') +- call pivout (comm, logfil, 1, np, ndigit, '_ngets: NP is') ++ call pivout (comm, logfil, 1, [kev], ndigit, '_ngets: KEV is') ++ call pivout (comm, logfil, 1, [np], ndigit, '_ngets: NP is') + call psvout (comm, logfil, kev+np, ritzr, ndigit, + & '_ngets: Eigenvalues of current H matrix -- real part') + call psvout (comm, logfil, kev+np, ritzi, ndigit, +diff --git a/PARPACK/SRC/BLACS/pssaitr.f b/PARPACK/SRC/BLACS/pssaitr.f +index 50816c4f..e57864a5 100644 +--- a/PARPACK/SRC/BLACS/pssaitr.f ++++ b/PARPACK/SRC/BLACS/pssaitr.f +@@ -389,9 +389,9 @@ subroutine pssaitr + 1000 continue + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_saitr: generating Arnoldi vector no.') +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_saitr: B-norm of the current residual =') + end if + c +@@ -409,7 +409,7 @@ subroutine pssaitr + c %---------------------------------------------------% + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_saitr: ****** restart at step ******') + end if + c +@@ -767,7 +767,7 @@ subroutine pssaitr + end if + c + if (msglvl .gt. 0 .and. iter .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: Iterative refinement for Arnoldi residual') + if (msglvl .gt. 2) then + xtemp(1) = rnorm +diff --git a/PARPACK/SRC/BLACS/pssapps.f b/PARPACK/SRC/BLACS/pssapps.f +index 7c8465b3..5198a734 100644 +--- a/PARPACK/SRC/BLACS/pssapps.f ++++ b/PARPACK/SRC/BLACS/pssapps.f +@@ -272,9 +272,9 @@ subroutine pssapps + big = abs(h(i,2)) + abs(h(i+1,2)) + if (h(i+1,1) .le. epsmch*big) then + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, i, ndigit, ++ call pivout (comm, logfil, 1, [i], ndigit, + & '_sapps: deflation at row/column no.') +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_sapps: occurred before shift number.') + call psvout (comm, logfil, 1, h(i+1,1), ndigit, + & '_sapps: the corresponding off diagonal element') +@@ -443,7 +443,7 @@ subroutine pssapps + big = abs(h(i,2)) + abs(h(i+1,2)) + if (h(i+1,1) .le. epsmch*big) then + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, i, ndigit, ++ call pivout (comm, logfil, 1, [i], ndigit, + & '_sapps: deflation at row/column no.') + call psvout (comm, logfil, 1, h(i+1,1), ndigit, + & '_sapps: the corresponding off diagonal element') +diff --git a/PARPACK/SRC/BLACS/pssaup2.f b/PARPACK/SRC/BLACS/pssaup2.f +index 57ab391b..87cc3a81 100644 +--- a/PARPACK/SRC/BLACS/pssaup2.f ++++ b/PARPACK/SRC/BLACS/pssaup2.f +@@ -421,13 +421,13 @@ subroutine pssaup2 + iter = iter + 1 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, iter, ndigit, ++ call pivout (comm, logfil, 1, [iter], ndigit, + & '_saup2: **** Start of major iteration number ****') + end if + if (msglvl .gt. 1) then +- call pivout (comm, logfil, 1, nev, ndigit, ++ call pivout (comm, logfil, 1, [nev], ndigit, + & '_saup2: The length of the current Lanczos factorization') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_saup2: Extend the Lanczos factorization by') + end if + c +@@ -466,7 +466,7 @@ subroutine pssaup2 + update = .false. + c + if (msglvl .gt. 1) then +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_saup2: Current B-norm of residual for factorization') + end if + c +@@ -716,7 +716,7 @@ subroutine pssaup2 + end if + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, nconv, ndigit, ++ call pivout (comm, logfil, 1, [nconv], ndigit, + & '_saup2: no. 
of "converged" Ritz values at this iter.') + if (msglvl .gt. 1) then + kp(1) = nev +@@ -763,7 +763,7 @@ subroutine pssaup2 + if (ishift .eq. 0) call scopy (np, workl, 1, ritz, 1) + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_saup2: The number of shifts to apply ') + call psvout (comm, logfil, np, workl, ndigit, + & '_saup2: shifts selected') +@@ -831,7 +831,7 @@ subroutine pssaup2 + 130 continue + c + if (msglvl .gt. 2) then +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_saup2: B-norm of residual for NEV factorization') + call psvout (comm, logfil, nev, h(1,2), ndigit, + & '_saup2: main diagonal of compressed H matrix') +diff --git a/PARPACK/SRC/BLACS/pssaupd.f b/PARPACK/SRC/BLACS/pssaupd.f +index 1b276e86..9d6061c1 100644 +--- a/PARPACK/SRC/BLACS/pssaupd.f ++++ b/PARPACK/SRC/BLACS/pssaupd.f +@@ -644,9 +644,9 @@ subroutine pssaupd + if (info .eq. 2) info = 3 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, mxiter, ndigit, ++ call pivout (comm, logfil, 1, [mxiter], ndigit, + & '_saupd: number of update iterations taken') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_saupd: number of "converged" Ritz values') + call psvout (comm, logfil, np, workl(Ritz), ndigit, + & '_saupd: final Ritz values') +diff --git a/PARPACK/SRC/BLACS/psseupd.f b/PARPACK/SRC/BLACS/psseupd.f +index 68edab15..e8c910e8 100644 +--- a/PARPACK/SRC/BLACS/psseupd.f ++++ b/PARPACK/SRC/BLACS/psseupd.f +@@ -523,9 +523,9 @@ subroutine psseupd + c %-----------------------------------------------------------% + c + if (msglvl .gt. 
2) then +- call pivout(comm, logfil, 1, numcnv, ndigit, ++ call pivout(comm, logfil, 1, [numcnv], ndigit, + & '_neupd: Number of specified eigenvalues') +- call pivout(comm, logfil, 1, nconv, ndigit, ++ call pivout(comm, logfil, 1, [nconv], ndigit, + & '_neupd: Number of "converged" eigenvalues') + end if + c +diff --git a/PARPACK/SRC/BLACS/pssgets.f b/PARPACK/SRC/BLACS/pssgets.f +index d138282a..660c274f 100644 +--- a/PARPACK/SRC/BLACS/pssgets.f ++++ b/PARPACK/SRC/BLACS/pssgets.f +@@ -216,8 +216,8 @@ subroutine pssgets + tsgets = tsgets + (t1 - t0) + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, kev, ndigit, '_sgets: KEV is') +- call pivout (comm, logfil, 1, np, ndigit, '_sgets: NP is') ++ call pivout (comm, logfil, 1, [kev], ndigit, '_sgets: KEV is') ++ call pivout (comm, logfil, 1, [np], ndigit, '_sgets: NP is') + call psvout (comm, logfil, kev+np, ritz, ndigit, + & '_sgets: Eigenvalues of current H matrix') + call psvout (comm, logfil, kev+np, bounds, ndigit, +diff --git a/PARPACK/SRC/BLACS/pzgetv0.f b/PARPACK/SRC/BLACS/pzgetv0.f +index 09723313..c1d173f0 100644 +--- a/PARPACK/SRC/BLACS/pzgetv0.f ++++ b/PARPACK/SRC/BLACS/pzgetv0.f +@@ -406,9 +406,9 @@ subroutine pzgetv0 + c %--------------------------------------% + c + if (msglvl .gt. 2) then +- call pdvout (comm, logfil, 1, rnorm0, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm0], ndigit, + & '_getv0: re-orthonalization ; rnorm0 is') +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_getv0: re-orthonalization ; rnorm is') + end if + c +@@ -440,7 +440,7 @@ subroutine pzgetv0 + c + if (msglvl .gt. 0) then + cnorm2 = dcmplx (rnorm,rzero) +- call pzvout (comm, logfil, 1, cnorm2, ndigit, ++ call pzvout (comm, logfil, 1, [cnorm2], ndigit, + & '_getv0: B-norm of initial / restarted starting vector') + end if + if (msglvl .gt. 
2) then +diff --git a/PARPACK/SRC/BLACS/pznaitr.f b/PARPACK/SRC/BLACS/pznaitr.f +index 61f467a2..92db7fe7 100644 +--- a/PARPACK/SRC/BLACS/pznaitr.f ++++ b/PARPACK/SRC/BLACS/pznaitr.f +@@ -401,9 +401,9 @@ subroutine pznaitr + 1000 continue + c + if (msglvl .gt. 1) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: generating Arnoldi vector number') +- call pzvout (comm, logfil, 1, rnorm, ndigit, ++ call pzvout (comm, logfil, 1, [rnorm], ndigit, + & '_naitr: B-norm of the current residual is') + end if + c +@@ -423,7 +423,7 @@ subroutine pznaitr + c %---------------------------------------------------% + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: ****** RESTART AT STEP ******') + end if + c +@@ -757,7 +757,7 @@ subroutine pznaitr + end if + c + if (msglvl .gt. 0 .and. iter .gt. 0 ) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: Iterative refinement for Arnoldi residual') + if (msglvl .gt. 2) then + rtemp(1) = rnorm +diff --git a/PARPACK/SRC/BLACS/pznapps.f b/PARPACK/SRC/BLACS/pznapps.f +index 796f4ded..155038d1 100644 +--- a/PARPACK/SRC/BLACS/pznapps.f ++++ b/PARPACK/SRC/BLACS/pznapps.f +@@ -284,9 +284,9 @@ subroutine pznapps + sigma = shift(jj) + c + if (msglvl .gt. 2 ) then +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_napps: shift number.') +- call pzvout (comm, logfil, 1, sigma, ndigit, ++ call pzvout (comm, logfil, 1, [sigma], ndigit, + & '_napps: Value of the shift ') + end if + c +@@ -307,9 +307,9 @@ subroutine pznapps + if ( abs(dble(h(i+1,i))) + & .le. max(ulp*tst1, smlnum) ) then + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, i, ndigit, ++ call pivout (comm, logfil, 1, [i], ndigit, + & '_napps: matrix splitting at row/column no.') +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_napps: matrix splitting with shift number.') + call pzvout (comm, logfil, 1, h(i+1,i), ndigit, + & '_napps: off diagonal element.') +@@ -323,9 +323,9 @@ subroutine pznapps + 40 continue + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, istart, ndigit, ++ call pivout (comm, logfil, 1, [istart], ndigit, + & '_napps: Start of current block ') +- call pivout (comm, logfil, 1, iend, ndigit, ++ call pivout (comm, logfil, 1, [iend], ndigit, + & '_napps: End of current block ') + end if + c +@@ -501,7 +501,7 @@ subroutine pznapps + & '_napps: sigmak = (e_{kev+p}^T*Q)*e_{kev}') + call pzvout (comm, logfil, 1, h(kev+1,kev), ndigit, + & '_napps: betak = e_{kev+1}^T*H*e_{kev}') +- call pivout (comm, logfil, 1, kev, ndigit, ++ call pivout (comm, logfil, 1, [kev], ndigit, + & '_napps: Order of the final Hessenberg matrix ') + if (msglvl .gt. 2) then + call pzmout (comm, logfil, kev, kev, h, ldh, ndigit, +diff --git a/PARPACK/SRC/BLACS/pznaup2.f b/PARPACK/SRC/BLACS/pznaup2.f +index 22b46dd2..1610a588 100644 +--- a/PARPACK/SRC/BLACS/pznaup2.f ++++ b/PARPACK/SRC/BLACS/pznaup2.f +@@ -398,7 +398,7 @@ subroutine pznaup2 + iter = iter + 1 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, iter, ndigit, ++ call pivout (comm, logfil, 1, [iter], ndigit, + & '_naup2: **** Start of major iteration number ****') + end if + c +@@ -411,9 +411,9 @@ subroutine pznaup2 + np = kplusp - nev + c + if (msglvl .gt. 
1) then +- call pivout (comm, logfil, 1, nev, ndigit, ++ call pivout (comm, logfil, 1, [nev], ndigit, + & '_naup2: The length of the current Arnoldi factorization') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naup2: Extend the Arnoldi factorization by') + end if + c +@@ -440,7 +440,7 @@ subroutine pznaup2 + update = .false. + c + if (msglvl .gt. 1) then +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_naup2: Corresponding B-norm of the residual') + end if + c +@@ -671,7 +671,7 @@ subroutine pznaup2 + end if + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, nconv, ndigit, ++ call pivout (comm, logfil, 1, [nconv], ndigit, + & '_naup2: no. of "converged" Ritz values at this iter.') + if (msglvl .gt. 1) then + kp(1) = nev +@@ -711,7 +711,7 @@ subroutine pznaup2 + end if + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naup2: The number of shifts to apply ') + call pzvout (comm, logfil, np, ritz, ndigit, + & '_naup2: values of the shifts') +@@ -776,7 +776,7 @@ subroutine pznaup2 + cnorm = .false. + c + if (msglvl .gt. 2) then +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_naup2: B-norm of residual for compressed factorization') + call pzmout (comm, logfil, nev, nev, h, ldh, ndigit, + & '_naup2: Compressed upper Hessenberg matrix H') +diff --git a/PARPACK/SRC/BLACS/pznaupd.f b/PARPACK/SRC/BLACS/pznaupd.f +index 0bf616f9..b46679a6 100644 +--- a/PARPACK/SRC/BLACS/pznaupd.f ++++ b/PARPACK/SRC/BLACS/pznaupd.f +@@ -618,9 +618,9 @@ subroutine pznaupd + if (info .eq. 2) info = 3 + c + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, mxiter, ndigit, ++ call pivout (comm, logfil, 1, [mxiter], ndigit, + & '_naupd: Number of update iterations taken') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naupd: Number of wanted "converged" Ritz values') + call pzvout (comm, logfil, np, workl(ritz), ndigit, + & '_naupd: The final Ritz values') +diff --git a/PARPACK/SRC/BLACS/pzneupd.f b/PARPACK/SRC/BLACS/pzneupd.f +index 1970e488..c2f508f2 100644 +--- a/PARPACK/SRC/BLACS/pzneupd.f ++++ b/PARPACK/SRC/BLACS/pzneupd.f +@@ -558,9 +558,9 @@ subroutine pzneupd + c %-----------------------------------------------------------% + c + if (msglvl .gt. 2) then +- call pivout(comm, logfil, 1, numcnv, ndigit, ++ call pivout(comm, logfil, 1, [numcnv], ndigit, + & '_neupd: Number of specified eigenvalues') +- call pivout(comm, logfil, 1, nconv, ndigit, ++ call pivout(comm, logfil, 1, [nconv], ndigit, + & '_neupd: Number of "converged" eigenvalues') + end if + c +diff --git a/PARPACK/SRC/BLACS/pzngets.f b/PARPACK/SRC/BLACS/pzngets.f +index 18e1518d..d880b4cc 100644 +--- a/PARPACK/SRC/BLACS/pzngets.f ++++ b/PARPACK/SRC/BLACS/pzngets.f +@@ -177,8 +177,8 @@ subroutine pzngets ( comm, ishift, which, kev, np, ritz, bounds) + tcgets = tcgets + (t1 - t0) + c + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, kev, ndigit, '_ngets: KEV is') +- call pivout (comm, logfil, 1, np, ndigit, '_ngets: NP is') ++ call pivout (comm, logfil, 1, [kev], ndigit, '_ngets: KEV is') ++ call pivout (comm, logfil, 1, [np], ndigit, '_ngets: NP is') + call pzvout (comm, logfil, kev+np, ritz, ndigit, + & '_ngets: Eigenvalues of current H matrix ') + call pzvout (comm, logfil, kev+np, bounds, ndigit, +diff --git a/PARPACK/SRC/MPI/pcgetv0.f b/PARPACK/SRC/MPI/pcgetv0.f +index aacfd046..72677a50 100644 +--- a/PARPACK/SRC/MPI/pcgetv0.f ++++ b/PARPACK/SRC/MPI/pcgetv0.f +@@ -185,7 +185,7 @@ subroutine pcgetv0 + save first, iseed, inits, iter, msglvl, orth, rnorm0 + c + Complex +- & cnorm_buf ++ & cnorm_buf, buf2(1) + c + c %----------------------% + c | External Subroutines | +@@ -332,8 +332,9 @@ subroutine pcgetv0 + first = .FALSE. + if (bmat .eq. 'G') then + cnorm_buf = cdotc (n, resid, 1, workd, 1) +- call MPI_ALLREDUCE( cnorm_buf, cnorm, 1, ++ call MPI_ALLREDUCE( [cnorm_buf], buf2, 1, + & MPI_COMPLEX, MPI_SUM, comm, ierr ) ++ cnorm = buf2(1) + rnorm0 = sqrt(slapy2(real (cnorm),aimag(cnorm))) + else if (bmat .eq. 'I') then + rnorm0 = pscnorm2( comm, n, resid, 1) +@@ -393,8 +394,9 @@ subroutine pcgetv0 + c + if (bmat .eq. 'G') then + cnorm_buf = cdotc (n, resid, 1, workd, 1) +- call MPI_ALLREDUCE( cnorm_buf, cnorm, 1, ++ call MPI_ALLREDUCE( [cnorm_buf], buf2, 1, + & MPI_COMPLEX, MPI_SUM, comm, ierr ) ++ cnorm = buf2(1) + rnorm = sqrt(slapy2(real (cnorm),aimag(cnorm))) + else if (bmat .eq. 'I') then + rnorm = pscnorm2(comm, n, resid, 1) +@@ -405,9 +407,9 @@ subroutine pcgetv0 + c %--------------------------------------% + c + if (msglvl .gt. 
2) then +- call psvout (comm, logfil, 1, rnorm0, ndigit, ++ call psvout (comm, logfil, 1, [rnorm0], ndigit, + & '_getv0: re-orthonalization ; rnorm0 is') +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_getv0: re-orthonalization ; rnorm is') + end if + c +@@ -439,7 +441,7 @@ subroutine pcgetv0 + c + if (msglvl .gt. 0) then + cnorm2 = cmplx(rnorm,rzero) +- call pcvout (comm, logfil, 1, cnorm2, ndigit, ++ call pcvout (comm, logfil, 1, [cnorm2], ndigit, + & '_getv0: B-norm of initial / restarted starting vector') + end if + if (msglvl .gt. 2) then +diff --git a/PARPACK/SRC/MPI/pcnaitr.f b/PARPACK/SRC/MPI/pcnaitr.f +index 4a27960a..fe246ea0 100644 +--- a/PARPACK/SRC/MPI/pcnaitr.f ++++ b/PARPACK/SRC/MPI/pcnaitr.f +@@ -293,7 +293,7 @@ subroutine pcnaitr + & betaj, rnorm1, smlnum, ulp, unfl, wnorm + c + Complex +- & cnorm_buf ++ & cnorm_buf, buf2(1) + c + c %----------------------% + c | External Subroutines | +@@ -404,9 +404,9 @@ subroutine pcnaitr + 1000 continue + c + if (msglvl .gt. 1) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: generating Arnoldi vector number') +- call pcvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_naitr: B-norm of the current residual is') + end if + c +@@ -426,7 +426,7 @@ subroutine pcnaitr + c %---------------------------------------------------% + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: ****** RESTART AT STEP ******') + end if + c +@@ -577,8 +577,9 @@ subroutine pcnaitr + c + if (bmat .eq. 'G') then + cnorm_buf = cdotc (n, resid, 1, workd(ipj), 1) +- call MPI_ALLREDUCE( cnorm_buf, cnorm, 1, ++ call MPI_ALLREDUCE( [cnorm_buf], buf2, 1, + & MPI_COMPLEX, MPI_SUM, comm, ierr ) ++ cnorm = buf2(1) + wnorm = sqrt( slapy2(real(cnorm),aimag(cnorm)) ) + else if (bmat .eq. 
'I') then + wnorm = pscnorm2(comm, n, resid, 1) +@@ -653,8 +654,9 @@ subroutine pcnaitr + c + if (bmat .eq. 'G') then + cnorm_buf = cdotc (n, resid, 1, workd(ipj), 1) +- call MPI_ALLREDUCE( cnorm_buf, cnorm, 1, ++ call MPI_ALLREDUCE( [cnorm_buf], buf2, 1, + & MPI_COMPLEX, MPI_SUM, comm, ierr ) ++ cnorm = buf2(1) + rnorm = sqrt( slapy2(real(cnorm),aimag(cnorm)) ) + else if (bmat .eq. 'I') then + rnorm = pscnorm2(comm, n, resid, 1) +@@ -757,15 +759,16 @@ subroutine pcnaitr + c + if (bmat .eq. 'G') then + cnorm_buf = cdotc (n, resid, 1, workd(ipj), 1) +- call MPI_ALLREDUCE( cnorm_buf, cnorm, 1, ++ call MPI_ALLREDUCE( [cnorm_buf], buf2, 1, + & MPI_COMPLEX, MPI_SUM, comm, ierr ) ++ cnorm = buf2(1) + rnorm1 = sqrt( slapy2(real(cnorm),aimag(cnorm)) ) + else if (bmat .eq. 'I') then + rnorm1 = pscnorm2(comm, n, resid, 1) + end if + c + if (msglvl .gt. 0 .and. iter .gt. 0 ) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: Iterative refinement for Arnoldi residual') + if (msglvl .gt. 2) then + rtemp(1) = rnorm +diff --git a/PARPACK/SRC/MPI/pcnapps.f b/PARPACK/SRC/MPI/pcnapps.f +index 76077b92..f17686d8 100644 +--- a/PARPACK/SRC/MPI/pcnapps.f ++++ b/PARPACK/SRC/MPI/pcnapps.f +@@ -283,9 +283,9 @@ subroutine pcnapps + sigma = shift(jj) + c + if (msglvl .gt. 2 ) then +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_napps: shift number.') +- call pcvout (comm, logfil, 1, sigma, ndigit, ++ call pcvout (comm, logfil, 1, [sigma], ndigit, + & '_napps: Value of the shift ') + end if + c +@@ -306,9 +306,9 @@ subroutine pcnapps + if ( abs(real(h(i+1,i))) + & .le. max(ulp*tst1, smlnum) ) then + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, i, ndigit, ++ call pivout (comm, logfil, 1, [i], ndigit, + & '_napps: matrix splitting at row/column no.') +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_napps: matrix splitting with shift number.') + call pcvout (comm, logfil, 1, h(i+1,i), ndigit, + & '_napps: off diagonal element.') +@@ -322,9 +322,9 @@ subroutine pcnapps + 40 continue + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, istart, ndigit, ++ call pivout (comm, logfil, 1, [istart], ndigit, + & '_napps: Start of current block ') +- call pivout (comm, logfil, 1, iend, ndigit, ++ call pivout (comm, logfil, 1, [iend], ndigit, + & '_napps: End of current block ') + end if + c +@@ -500,7 +500,7 @@ subroutine pcnapps + & '_napps: sigmak = (e_{kev+p}^T*Q)*e_{kev}') + call pcvout (comm, logfil, 1, h(kev+1,kev), ndigit, + & '_napps: betak = e_{kev+1}^T*H*e_{kev}') +- call pivout (comm, logfil, 1, kev, ndigit, ++ call pivout (comm, logfil, 1, [kev], ndigit, + & '_napps: Order of the final Hessenberg matrix ') + if (msglvl .gt. 2) then + call pcmout (comm, logfil, kev, kev, h, ldh, ndigit, +diff --git a/PARPACK/SRC/MPI/pcnaup2.f b/PARPACK/SRC/MPI/pcnaup2.f +index 91fe9293..4b0d6ceb 100644 +--- a/PARPACK/SRC/MPI/pcnaup2.f ++++ b/PARPACK/SRC/MPI/pcnaup2.f +@@ -237,7 +237,7 @@ subroutine pcnaup2 + & nevbef, nev0 , np0, eps23 + c + Real +- & cmpnorm_buf ++ & cmpnorm_buf, buf2(1) + c + c %-----------------------% + c | Local array arguments | +@@ -401,7 +401,7 @@ subroutine pcnaup2 + iter = iter + 1 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, iter, ndigit, ++ call pivout (comm, logfil, 1, [iter], ndigit, + & '_naup2: **** Start of major iteration number ****') + end if + c +@@ -414,9 +414,9 @@ subroutine pcnaup2 + np = kplusp - nev + c + if (msglvl .gt. 
1) then +- call pivout (comm, logfil, 1, nev, ndigit, ++ call pivout (comm, logfil, 1, [nev], ndigit, + & '_naup2: The length of the current Arnoldi factorization') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naup2: Extend the Arnoldi factorization by') + end if + c +@@ -443,7 +443,7 @@ subroutine pcnaup2 + update = .false. + c + if (msglvl .gt. 1) then +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_naup2: Corresponding B-norm of the residual') + end if + c +@@ -674,7 +674,7 @@ subroutine pcnaup2 + end if + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, nconv, ndigit, ++ call pivout (comm, logfil, 1, [nconv], ndigit, + & '_naup2: no. of "converged" Ritz values at this iter.') + if (msglvl .gt. 1) then + kp(1) = nev +@@ -714,7 +714,7 @@ subroutine pcnaup2 + end if + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naup2: The number of shifts to apply ') + call pcvout (comm, logfil, np, ritz, ndigit, + & '_naup2: values of the shifts') +@@ -771,8 +771,9 @@ subroutine pcnaup2 + c + if (bmat .eq. 'G') then + cmpnorm_buf = cdotc (n, resid, 1, workd, 1) +- call MPI_ALLREDUCE( cmpnorm_buf, cmpnorm, 1, ++ call MPI_ALLREDUCE( [cmpnorm_buf], buf2, 1, + & MPI_COMPLEX, MPI_SUM, comm, ierr ) ++ cmpnorm = buf2(1) + rnorm = sqrt(slapy2(real(cmpnorm),aimag(cmpnorm))) + else if (bmat .eq. 'I') then + rnorm = pscnorm2(comm, n, resid, 1) +@@ -780,7 +781,7 @@ subroutine pcnaup2 + cnorm = .false. + c + if (msglvl .gt. 
2) then +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_naup2: B-norm of residual for compressed factorization') + call pcmout (comm, logfil, nev, nev, h, ldh, ndigit, + & '_naup2: Compressed upper Hessenberg matrix H') +diff --git a/PARPACK/SRC/MPI/pcnaupd.f b/PARPACK/SRC/MPI/pcnaupd.f +index 5b8e1f5c..0bd6eb6c 100644 +--- a/PARPACK/SRC/MPI/pcnaupd.f ++++ b/PARPACK/SRC/MPI/pcnaupd.f +@@ -626,9 +626,9 @@ subroutine pcnaupd + if (info .eq. 2) info = 3 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, mxiter, ndigit, ++ call pivout (comm, logfil, 1, [mxiter], ndigit, + & '_naupd: Number of update iterations taken') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naupd: Number of wanted "converged" Ritz values') + call pcvout (comm, logfil, np, workl(ritz), ndigit, + & '_naupd: The final Ritz values') +diff --git a/PARPACK/SRC/MPI/pcneupd.f b/PARPACK/SRC/MPI/pcneupd.f +index 2566f8a5..8ced1df6 100644 +--- a/PARPACK/SRC/MPI/pcneupd.f ++++ b/PARPACK/SRC/MPI/pcneupd.f +@@ -558,9 +558,9 @@ subroutine pcneupd + c %-----------------------------------------------------------% + c + if (msglvl .gt. 2) then +- call pivout(comm, logfil, 1, numcnv, ndigit, ++ call pivout(comm, logfil, 1, [numcnv], ndigit, + & '_neupd: Number of specified eigenvalues') +- call pivout(comm, logfil, 1, nconv, ndigit, ++ call pivout(comm, logfil, 1, [nconv], ndigit, + & '_neupd: Number of "converged" eigenvalues') + end if + c +diff --git a/PARPACK/SRC/MPI/pcngets.f b/PARPACK/SRC/MPI/pcngets.f +index 34b9b047..5f672149 100644 +--- a/PARPACK/SRC/MPI/pcngets.f ++++ b/PARPACK/SRC/MPI/pcngets.f +@@ -177,8 +177,8 @@ subroutine pcngets ( comm, ishift, which, kev, np, ritz, bounds) + tcgets = tcgets + (t1 - t0) + c + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, kev, ndigit, '_ngets: KEV is') +- call pivout (comm, logfil, 1, np, ndigit, '_ngets: NP is') ++ call pivout (comm, logfil, 1, [kev], ndigit, '_ngets: KEV is') ++ call pivout (comm, logfil, 1, [np], ndigit, '_ngets: NP is') + call pcvout (comm, logfil, kev+np, ritz, ndigit, + & '_ngets: Eigenvalues of current H matrix ') + call pcvout (comm, logfil, kev+np, bounds, ndigit, +diff --git a/PARPACK/SRC/MPI/pdgetv0.f b/PARPACK/SRC/MPI/pdgetv0.f +index 3dc71c66..54ed850c 100644 +--- a/PARPACK/SRC/MPI/pdgetv0.f ++++ b/PARPACK/SRC/MPI/pdgetv0.f +@@ -180,7 +180,7 @@ subroutine pdgetv0 + logical first, inits, orth + integer idist, iseed(4), iter, msglvl, jj + Double precision +- & rnorm0 ++ & rnorm0, buf2(1) + save first, iseed, inits, iter, msglvl, orth, rnorm0 + c + Double precision +@@ -318,9 +318,9 @@ subroutine pdgetv0 + first = .FALSE. + if (bmat .eq. 'G') then + rnorm_buf = ddot (n, resid, 1, workd, 1) +- call MPI_ALLREDUCE( rnorm_buf, rnorm0, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_DOUBLE_PRECISION, MPI_SUM, comm, ierr ) +- rnorm0 = sqrt(abs(rnorm0)) ++ rnorm0 = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + rnorm0 = pdnorm2( comm, n, resid, 1 ) + end if +@@ -379,9 +379,9 @@ subroutine pdgetv0 + c + if (bmat .eq. 'G') then + rnorm_buf = ddot (n, resid, 1, workd, 1) +- call MPI_ALLREDUCE( rnorm_buf, rnorm, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_DOUBLE_PRECISION, MPI_SUM, comm, ierr ) +- rnorm = sqrt(abs(rnorm)) ++ rnorm = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + rnorm = pdnorm2( comm, n, resid, 1 ) + end if +@@ -391,9 +391,9 @@ subroutine pdgetv0 + c %--------------------------------------% + c + if (msglvl .gt. 
2) then +- call pdvout (comm, logfil, 1, rnorm0, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm0], ndigit, + & '_getv0: re-orthonalization ; rnorm0 is') +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_getv0: re-orthonalization ; rnorm is') + end if + c +@@ -424,7 +424,7 @@ subroutine pdgetv0 + 50 continue + c + if (msglvl .gt. 0) then +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_getv0: B-norm of initial / restarted starting vector') + end if + if (msglvl .gt. 2) then +diff --git a/PARPACK/SRC/MPI/pdlamch10.f b/PARPACK/SRC/MPI/pdlamch10.f +index fabfcbd1..64cd6666 100644 +--- a/PARPACK/SRC/MPI/pdlamch10.f ++++ b/PARPACK/SRC/MPI/pdlamch10.f +@@ -57,7 +57,7 @@ DOUBLE PRECISION FUNCTION PDLAMCH10( ICTXT, CMACH ) + * + * .. Local Scalars .. + INTEGER IDUMM +- DOUBLE PRECISION TEMP, TEMP1 ++ DOUBLE PRECISION TEMP, TEMP1, buf2(1) + * .. + * .. External Subroutines .. + * EXTERNAL DGAMN2D, DGAMX2D +@@ -73,19 +73,20 @@ DOUBLE PRECISION FUNCTION PDLAMCH10( ICTXT, CMACH ) + * + IF( LSAME( CMACH, 'E' ).OR.LSAME( CMACH, 'S' ).OR. 
+ $ LSAME( CMACH, 'M' ).OR.LSAME( CMACH, 'U' ) ) THEN +- CALL MPI_ALLREDUCE( TEMP1, TEMP, 1, MPI_DOUBLE_PRECISION, ++ CALL MPI_ALLREDUCE( [TEMP1], buf2, 1, MPI_DOUBLE_PRECISION, + $ MPI_MAX, ICTXT, IDUMM ) +-* CALL DGAMX2D( ICTXT, 'All', ' ', 1, 1, TEMP, 1, IDUMM, ++* CALL DGAMX2D( ICTXT, 'All', ' ', 1, 1, buf2(1), 1, IDUMM, + * $ IDUMM, 1, -1, IDUMM ) + ELSE IF( LSAME( CMACH, 'L' ).OR.LSAME( CMACH, 'O' ) ) THEN +- CALL MPI_ALLREDUCE( TEMP1, TEMP, 1, MPI_DOUBLE_PRECISION, ++ CALL MPI_ALLREDUCE( [TEMP1], buf2, 1, MPI_DOUBLE_PRECISION, + $ MPI_MIN, ICTXT, IDUMM ) +-* CALL DGAMN2D( ICTXT, 'All', ' ', 1, 1, TEMP, 1, IDUMM, ++* CALL DGAMN2D( ICTXT, 'All', ' ', 1, 1, buf2(1), 1, IDUMM, + * $ IDUMM, 1, -1, IDUMM ) + ELSE +- TEMP = TEMP1 ++ buf2(1) = TEMP1 + END IF + * ++ TEMP = buf2(1) + PDLAMCH10 = TEMP + * + * End of PDLAMCH10 +diff --git a/PARPACK/SRC/MPI/pdnaitr.f b/PARPACK/SRC/MPI/pdnaitr.f +index f531fe1e..3c71ff93 100644 +--- a/PARPACK/SRC/MPI/pdnaitr.f ++++ b/PARPACK/SRC/MPI/pdnaitr.f +@@ -276,7 +276,7 @@ subroutine pdnaitr + & betaj, rnorm1, smlnum, ulp, unfl, wnorm + c + Double precision +- & rnorm_buf ++ & rnorm_buf, buf2(1) + c + c + c %-----------------------% +@@ -393,9 +393,9 @@ subroutine pdnaitr + 1000 continue + c + if (msglvl .gt. 1) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: generating Arnoldi vector number') +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_naitr: B-norm of the current residual is') + end if + c +@@ -415,7 +415,7 @@ subroutine pdnaitr + c %---------------------------------------------------% + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: ****** RESTART AT STEP ******') + end if + c +@@ -566,9 +566,9 @@ subroutine pdnaitr + c + if (bmat .eq. 
'G') then + rnorm_buf = ddot (n, resid, 1, workd(ipj), 1) +- call MPI_ALLREDUCE( rnorm_buf, wnorm, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_DOUBLE_PRECISION, MPI_SUM, comm, ierr ) +- wnorm = sqrt(abs(wnorm)) ++ wnorm = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + wnorm = pdnorm2( comm, n, resid, 1 ) + end if +@@ -642,9 +642,9 @@ subroutine pdnaitr + c + if (bmat .eq. 'G') then + rnorm_buf = ddot (n, resid, 1, workd(ipj), 1) +- call MPI_ALLREDUCE( rnorm_buf, rnorm, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_DOUBLE_PRECISION, MPI_SUM, comm, ierr ) +- rnorm = sqrt(abs(rnorm)) ++ rnorm = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + rnorm = pdnorm2( comm, n, resid, 1 ) + end if +@@ -745,15 +745,15 @@ subroutine pdnaitr + c + if (bmat .eq. 'G') then + rnorm_buf = ddot (n, resid, 1, workd(ipj), 1) +- call MPI_ALLREDUCE( rnorm_buf, rnorm1, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_DOUBLE_PRECISION, MPI_SUM, comm, ierr ) +- rnorm1 = sqrt(abs(rnorm1)) ++ rnorm1 = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + rnorm1 = pdnorm2( comm, n, resid, 1 ) + end if + c + if (msglvl .gt. 0 .and. iter .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: Iterative refinement for Arnoldi residual') + if (msglvl .gt. 2) then + xtemp(1) = rnorm +diff --git a/PARPACK/SRC/MPI/pdnapps.f b/PARPACK/SRC/MPI/pdnapps.f +index fb6f153f..b6ac3502 100644 +--- a/PARPACK/SRC/MPI/pdnapps.f ++++ b/PARPACK/SRC/MPI/pdnapps.f +@@ -276,11 +276,11 @@ subroutine pdnapps + sigmai = shifti(jj) + c + if (msglvl .gt. 
2 ) then +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_napps: shift number.') +- call pdvout (comm, logfil, 1, sigmar, ndigit, ++ call pdvout (comm, logfil, 1, [sigmar], ndigit, + & '_napps: The real part of the shift ') +- call pdvout (comm, logfil, 1, sigmai, ndigit, ++ call pdvout (comm, logfil, 1, [sigmai], ndigit, + & '_napps: The imaginary part of the shift ') + end if + c +@@ -345,9 +345,9 @@ subroutine pdnapps + & tst1 = dlanhs( '1', kplusp-jj+1, h, ldh, workl ) + if( abs( h( i+1,i ) ).le.max( ulp*tst1, smlnum ) ) then + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, i, ndigit, ++ call pivout (comm, logfil, 1, [i], ndigit, + & '_napps: matrix splitting at row/column no.') +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_napps: matrix splitting with shift number.') + call pdvout (comm, logfil, 1, h(i+1,i), ndigit, + & '_napps: off diagonal element.') +@@ -361,9 +361,9 @@ subroutine pdnapps + 40 continue + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, istart, ndigit, ++ call pivout (comm, logfil, 1, [istart], ndigit, + & '_napps: Start of current block ') +- call pivout (comm, logfil, 1, iend, ndigit, ++ call pivout (comm, logfil, 1, [iend], ndigit, + & '_napps: End of current block ') + end if + c +@@ -635,7 +635,7 @@ subroutine pdnapps + & '_napps: sigmak = (e_{kev+p}^T*Q)*e_{kev}') + call pdvout (comm, logfil, 1, h(kev+1,kev), ndigit, + & '_napps: betak = e_{kev+1}^T*H*e_{kev}') +- call pivout (comm, logfil, 1, kev, ndigit, ++ call pivout (comm, logfil, 1, [kev], ndigit, + & '_napps: Order of the final Hessenberg matrix ') + if (msglvl .gt. 
2) then + call pdmout (comm, logfil, kev, kev, h, ldh, ndigit, +diff --git a/PARPACK/SRC/MPI/pdnaup2.f b/PARPACK/SRC/MPI/pdnaup2.f +index d0f73738..c265380a 100644 +--- a/PARPACK/SRC/MPI/pdnaup2.f ++++ b/PARPACK/SRC/MPI/pdnaup2.f +@@ -234,7 +234,7 @@ subroutine pdnaup2 + & nevbef, nev0 , np0 , nptemp, numcnv, + & j + Double precision +- & rnorm , temp , eps23 ++ & rnorm , temp , eps23, buf2(1) + save cnorm , getv0, initv , update, ushift, + & rnorm , iter , kplusp, msglvl, nconv, + & nevbef, nev0 , np0 , eps23 , numcnv +@@ -408,7 +408,7 @@ subroutine pdnaup2 + iter = iter + 1 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, iter, ndigit, ++ call pivout (comm, logfil, 1, [iter], ndigit, + & '_naup2: **** Start of major iteration number ****') + end if + c +@@ -421,9 +421,9 @@ subroutine pdnaup2 + np = kplusp - nev + c + if (msglvl .gt. 1) then +- call pivout (comm, logfil, 1, nev, ndigit, ++ call pivout (comm, logfil, 1, [nev], ndigit, + & '_naup2: The length of the current Arnoldi factorization') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naup2: Extend the Arnoldi factorization by') + end if + c +@@ -455,7 +455,7 @@ subroutine pdnaup2 + update = .false. + c + if (msglvl .gt. 1) then +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_naup2: Corresponding B-norm of the residual') + end if + c +@@ -699,7 +699,7 @@ subroutine pdnaup2 + end if + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, nconv, ndigit, ++ call pivout (comm, logfil, 1, [nconv], ndigit, + & '_naup2: no. of "converged" Ritz values at this iter.') + if (msglvl .gt. 1) then + kp(1) = nev +@@ -751,7 +751,7 @@ subroutine pdnaup2 + end if + c + if (msglvl .gt. 
2) then +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naup2: The number of shifts to apply ') + call pdvout (comm, logfil, np, ritzr, ndigit, + & '_naup2: Real part of the shifts') +@@ -810,16 +810,16 @@ subroutine pdnaup2 + c + if (bmat .eq. 'G') then + rnorm_buf = ddot (n, resid, 1, workd, 1) +- call MPI_ALLREDUCE( rnorm_buf, rnorm, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_DOUBLE_PRECISION , MPI_SUM, comm, ierr ) +- rnorm = sqrt(abs(rnorm)) ++ rnorm = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + rnorm = pdnorm2 ( comm, n, resid, 1 ) + end if + cnorm = .false. + c + if (msglvl .gt. 2) then +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_naup2: B-norm of residual for compressed factorization') + call pdmout (comm, logfil, nev, nev, h, ldh, ndigit, + & '_naup2: Compressed upper Hessenberg matrix H') +diff --git a/PARPACK/SRC/MPI/pdnaupd.f b/PARPACK/SRC/MPI/pdnaupd.f +index 1bf0fc12..231f9cab 100644 +--- a/PARPACK/SRC/MPI/pdnaupd.f ++++ b/PARPACK/SRC/MPI/pdnaupd.f +@@ -650,9 +650,9 @@ subroutine pdnaupd + if (info .eq. 2) info = 3 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, mxiter, ndigit, ++ call pivout (comm, logfil, 1, [mxiter], ndigit, + & '_naupd: Number of update iterations taken') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naupd: Number of wanted "converged" Ritz values') + call pdvout (comm, logfil, np, workl(ritzr), ndigit, + & '_naupd: Real part of the final Ritz values') +diff --git a/PARPACK/SRC/MPI/pdneupd.f b/PARPACK/SRC/MPI/pdneupd.f +index f80651e2..d5741956 100644 +--- a/PARPACK/SRC/MPI/pdneupd.f ++++ b/PARPACK/SRC/MPI/pdneupd.f +@@ -617,9 +617,9 @@ subroutine pdneupd + c %-----------------------------------------------------------% + c + if (msglvl .gt. 
2) then +- call pivout(comm, logfil, 1, numcnv, ndigit, ++ call pivout(comm, logfil, 1, [numcnv], ndigit, + & '_neupd: Number of specified eigenvalues') +- call pivout(comm, logfil, 1, nconv, ndigit, ++ call pivout(comm, logfil, 1, [nconv], ndigit, + & '_neupd: Number of "converged" eigenvalues') + end if + c +diff --git a/PARPACK/SRC/MPI/pdngets.f b/PARPACK/SRC/MPI/pdngets.f +index 4ff35165..c4294b8c 100644 +--- a/PARPACK/SRC/MPI/pdngets.f ++++ b/PARPACK/SRC/MPI/pdngets.f +@@ -226,8 +226,8 @@ subroutine pdngets + tngets = tngets + (t1 - t0) + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, kev, ndigit, '_ngets: KEV is') +- call pivout (comm, logfil, 1, np, ndigit, '_ngets: NP is') ++ call pivout (comm, logfil, 1, [kev], ndigit, '_ngets: KEV is') ++ call pivout (comm, logfil, 1, [np], ndigit, '_ngets: NP is') + call pdvout (comm, logfil, kev+np, ritzr, ndigit, + & '_ngets: Eigenvalues of current H matrix -- real part') + call pdvout (comm, logfil, kev+np, ritzi, ndigit, +diff --git a/PARPACK/SRC/MPI/pdnorm2.f b/PARPACK/SRC/MPI/pdnorm2.f +index 0e149585..5061d3d9 100644 +--- a/PARPACK/SRC/MPI/pdnorm2.f ++++ b/PARPACK/SRC/MPI/pdnorm2.f +@@ -45,7 +45,7 @@ Double precision function pdnorm2 ( comm, n, x, inc ) + c %---------------% + c + Double precision +- & max, buf, zero ++ & max, buf, zero, buf2(1) + parameter ( zero = 0.0 ) + c + c %---------------------% +@@ -69,15 +69,16 @@ Double precision function pdnorm2 ( comm, n, x, inc ) + pdnorm2 = dnrm2( n, x, inc) + c + buf = pdnorm2 +- call MPI_ALLREDUCE( buf, max, 1, MPI_DOUBLE_PRECISION, ++ call MPI_ALLREDUCE( [buf], buf2, 1, MPI_DOUBLE_PRECISION, + & MPI_MAX, comm, ierr ) ++ max = buf2(1) + if ( max .eq. 
zero ) then + pdnorm2 = zero + else + buf = (pdnorm2/max)**2.0 +- call MPI_ALLREDUCE( buf, pdnorm2, 1, MPI_DOUBLE_PRECISION, ++ call MPI_ALLREDUCE( [buf], buf2, 1, MPI_DOUBLE_PRECISION, + & MPI_SUM, comm, ierr ) +- pdnorm2 = max * sqrt(abs(pdnorm2)) ++ pdnorm2 = max * sqrt(abs(buf2(1))) + endif + c + c %----------------% +diff --git a/PARPACK/SRC/MPI/pdsaitr.f b/PARPACK/SRC/MPI/pdsaitr.f +index 5fe84b81..42396c11 100644 +--- a/PARPACK/SRC/MPI/pdsaitr.f ++++ b/PARPACK/SRC/MPI/pdsaitr.f +@@ -264,7 +264,7 @@ subroutine pdsaitr + integer i, ierr, ipj, irj, ivj, iter, itry, j, msglvl, infol, + & jj + Double precision +- & rnorm1, wnorm, safmin, temp1 ++ & rnorm1, wnorm, safmin, temp1, buf2(1) + save orth1, orth2, rstart, step3, step4, + & ierr, ipj, irj, ivj, iter, itry, j, msglvl, + & rnorm1, safmin, wnorm +@@ -392,9 +392,9 @@ subroutine pdsaitr + 1000 continue + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_saitr: generating Arnoldi vector no.') +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_saitr: B-norm of the current residual =') + end if + c +@@ -412,7 +412,7 @@ subroutine pdsaitr + c %---------------------------------------------------% + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_saitr: ****** restart at step ******') + end if + c +@@ -572,14 +572,14 @@ subroutine pdsaitr + c %----------------------------------% + c + rnorm_buf = ddot (n, resid, 1, workd(ivj), 1) +- call MPI_ALLREDUCE( rnorm_buf, wnorm, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_DOUBLE_PRECISION, MPI_SUM, comm, ierr ) +- wnorm = sqrt(abs(wnorm)) ++ wnorm = sqrt(abs(buf2(1))) + else if (bmat .eq. 
'G') then + rnorm_buf = ddot (n, resid, 1, workd(ipj), 1) +- call MPI_ALLREDUCE( rnorm_buf, wnorm, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_DOUBLE_PRECISION, MPI_SUM, comm, ierr ) +- wnorm = sqrt(abs(wnorm)) ++ wnorm = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + wnorm = pdnorm2( comm, n, resid, 1 ) + end if +@@ -669,9 +669,9 @@ subroutine pdsaitr + c + if (bmat .eq. 'G') then + rnorm_buf = ddot (n, resid, 1, workd(ipj), 1) +- call MPI_ALLREDUCE( rnorm_buf, rnorm, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_DOUBLE_PRECISION, MPI_SUM, comm, ierr ) +- rnorm = sqrt(abs(rnorm)) ++ rnorm = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + rnorm = pdnorm2( comm, n, resid, 1 ) + end if +@@ -769,15 +769,15 @@ subroutine pdsaitr + c + if (bmat .eq. 'G') then + rnorm_buf = ddot (n, resid, 1, workd(ipj), 1) +- call MPI_ALLREDUCE( rnorm_buf, rnorm1, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2(1), 1, + & MPI_DOUBLE_PRECISION, MPI_SUM, comm, ierr ) +- rnorm1 = sqrt(abs(rnorm1)) ++ rnorm1 = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + rnorm1 = pdnorm2( comm, n, resid, 1 ) + end if + c + if (msglvl .gt. 0 .and. iter .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: Iterative refinement for Arnoldi residual') + if (msglvl .gt. 2) then + xtemp(1) = rnorm +diff --git a/PARPACK/SRC/MPI/pdsapps.f b/PARPACK/SRC/MPI/pdsapps.f +index e79db4e3..3feb8da0 100644 +--- a/PARPACK/SRC/MPI/pdsapps.f ++++ b/PARPACK/SRC/MPI/pdsapps.f +@@ -272,9 +272,9 @@ subroutine pdsapps + big = abs(h(i,2)) + abs(h(i+1,2)) + if (h(i+1,1) .le. epsmch*big) then + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, i, ndigit, ++ call pivout (comm, logfil, 1, [i], ndigit, + & '_sapps: deflation at row/column no.') +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_sapps: occurred before shift number.') + call pdvout (comm, logfil, 1, h(i+1,1), ndigit, + & '_sapps: the corresponding off diagonal element') +@@ -443,7 +443,7 @@ subroutine pdsapps + big = abs(h(i,2)) + abs(h(i+1,2)) + if (h(i+1,1) .le. epsmch*big) then + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, i, ndigit, ++ call pivout (comm, logfil, 1, [i], ndigit, + & '_sapps: deflation at row/column no.') + call pdvout (comm, logfil, 1, h(i+1,1), ndigit, + & '_sapps: the corresponding off diagonal element') +diff --git a/PARPACK/SRC/MPI/pdsaup2.f b/PARPACK/SRC/MPI/pdsaup2.f +index 820459b3..a575b33f 100644 +--- a/PARPACK/SRC/MPI/pdsaup2.f ++++ b/PARPACK/SRC/MPI/pdsaup2.f +@@ -212,7 +212,7 @@ subroutine pdsaup2 + integer ido, info, ishift, iupd, ldh, ldq, ldv, mxiter, + & n, mode, nev, np + Double precision +- & tol ++ & tol, buf2(1) + c + c %-----------------% + c | Array Arguments | +@@ -424,13 +424,13 @@ subroutine pdsaup2 + iter = iter + 1 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, iter, ndigit, ++ call pivout (comm, logfil, 1, [iter], ndigit, + & '_saup2: **** Start of major iteration number ****') + end if + if (msglvl .gt. 1) then +- call pivout (comm, logfil, 1, nev, ndigit, ++ call pivout (comm, logfil, 1, [nev], ndigit, + & '_saup2: The length of the current Lanczos factorization') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_saup2: Extend the Lanczos factorization by') + end if + c +@@ -469,7 +469,7 @@ subroutine pdsaup2 + update = .false. + c + if (msglvl .gt. 
1) then +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_saup2: Current B-norm of residual for factorization') + end if + c +@@ -719,7 +719,7 @@ subroutine pdsaup2 + end if + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, nconv, ndigit, ++ call pivout (comm, logfil, 1, [nconv], ndigit, + & '_saup2: no. of "converged" Ritz values at this iter.') + if (msglvl .gt. 1) then + kp(1) = nev +@@ -766,7 +766,7 @@ subroutine pdsaup2 + if (ishift .eq. 0) call dcopy (np, workl, 1, ritz, 1) + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_saup2: The number of shifts to apply ') + call pdvout (comm, logfil, np, workl, ndigit, + & '_saup2: shifts selected') +@@ -825,9 +825,9 @@ subroutine pdsaup2 + c + if (bmat .eq. 'G') then + rnorm_buf = ddot (n, resid, 1, workd, 1) +- call MPI_ALLREDUCE( rnorm_buf, rnorm, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_DOUBLE_PRECISION, MPI_SUM, comm, ierr ) +- rnorm = sqrt(abs(rnorm)) ++ rnorm = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + rnorm = pdnorm2( comm, n, resid, 1 ) + end if +@@ -835,7 +835,7 @@ subroutine pdsaup2 + 130 continue + c + if (msglvl .gt. 2) then +- call pdvout (comm, logfil, 1, rnorm, ndigit, ++ call pdvout (comm, logfil, 1, [rnorm], ndigit, + & '_saup2: B-norm of residual for NEV factorization') + call pdvout (comm, logfil, nev, h(1,2), ndigit, + & '_saup2: main diagonal of compressed H matrix') +diff --git a/PARPACK/SRC/MPI/pdsaupd.f b/PARPACK/SRC/MPI/pdsaupd.f +index 47d05227..f505dc90 100644 +--- a/PARPACK/SRC/MPI/pdsaupd.f ++++ b/PARPACK/SRC/MPI/pdsaupd.f +@@ -652,9 +652,9 @@ subroutine pdsaupd + if (info .eq. 2) info = 3 + c + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, mxiter, ndigit, ++ call pivout (comm, logfil, 1, [mxiter], ndigit, + & '_saupd: number of update iterations taken') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_saupd: number of "converged" Ritz values') + call pdvout (comm, logfil, np, workl(Ritz), ndigit, + & '_saupd: final Ritz values') +diff --git a/PARPACK/SRC/MPI/pdseupd.f b/PARPACK/SRC/MPI/pdseupd.f +index 9e866ab7..66e85995 100644 +--- a/PARPACK/SRC/MPI/pdseupd.f ++++ b/PARPACK/SRC/MPI/pdseupd.f +@@ -523,9 +523,9 @@ subroutine pdseupd + c %-----------------------------------------------------------% + c + if (msglvl .gt. 2) then +- call pivout(comm, logfil, 1, numcnv, ndigit, ++ call pivout(comm, logfil, 1, [numcnv], ndigit, + & '_neupd: Number of specified eigenvalues') +- call pivout(comm, logfil, 1, nconv, ndigit, ++ call pivout(comm, logfil, 1, [nconv], ndigit, + & '_neupd: Number of "converged" eigenvalues') + end if + c +diff --git a/PARPACK/SRC/MPI/pdsgets.f b/PARPACK/SRC/MPI/pdsgets.f +index 01b52394..c71421b4 100644 +--- a/PARPACK/SRC/MPI/pdsgets.f ++++ b/PARPACK/SRC/MPI/pdsgets.f +@@ -216,8 +216,8 @@ subroutine pdsgets + tsgets = tsgets + (t1 - t0) + c + if (msglvl .gt. 
0) then +- call pivout (comm, logfil, 1, kev, ndigit, '_sgets: KEV is') +- call pivout (comm, logfil, 1, np, ndigit, '_sgets: NP is') ++ call pivout (comm, logfil, 1, [kev], ndigit, '_sgets: KEV is') ++ call pivout (comm, logfil, 1, [np], ndigit, '_sgets: NP is') + call pdvout (comm, logfil, kev+np, ritz, ndigit, + & '_sgets: Eigenvalues of current H matrix') + call pdvout (comm, logfil, kev+np, bounds, ndigit, +diff --git a/PARPACK/SRC/MPI/pdznorm2.f b/PARPACK/SRC/MPI/pdznorm2.f +index 7a7173f3..4651679b 100644 +--- a/PARPACK/SRC/MPI/pdznorm2.f ++++ b/PARPACK/SRC/MPI/pdznorm2.f +@@ -45,7 +45,7 @@ Double precision function pdznorm2 ( comm, n, x, inc ) + c %---------------% + c + Double precision +- & max, buf, zero ++ & max(1), buf, zero + parameter ( zero = 0.0 ) + c + c %---------------------% +@@ -59,7 +59,7 @@ Double precision function pdznorm2 ( comm, n, x, inc ) + c %--------------------% + c + Double precision +- & dznrm2 ++ & dznrm2, buf2(1) + External dznrm2 + c + c %-----------------------% +@@ -69,15 +69,15 @@ Double precision function pdznorm2 ( comm, n, x, inc ) + pdznorm2 = dznrm2( n, x, inc) + c + buf = pdznorm2 +- call MPI_ALLREDUCE( buf, max, 1, MPI_DOUBLE_PRECISION, ++ call MPI_ALLREDUCE( [buf], max, 1, MPI_DOUBLE_PRECISION, + & MPI_MAX, comm, ierr ) +- if ( max .eq. zero ) then ++ if ( max(1) .eq. 
zero ) then + pdznorm2 = zero + else +- buf = (pdznorm2/max)**2.0 +- call MPI_ALLREDUCE( buf, pdznorm2, 1, MPI_DOUBLE_PRECISION, ++ buf = (pdznorm2/max(1))**2.0 ++ call MPI_ALLREDUCE( [buf], buf2, 1, MPI_DOUBLE_PRECISION, + & MPI_SUM, comm, ierr ) +- pdznorm2 = max * sqrt(abs(pdznorm2)) ++ pdznorm2 = max(1) * sqrt(abs(buf2(1))) + endif + c + c %-----------------% +diff --git a/PARPACK/SRC/MPI/pscnorm2.f b/PARPACK/SRC/MPI/pscnorm2.f +index 2c64831e..50dea8bd 100644 +--- a/PARPACK/SRC/MPI/pscnorm2.f ++++ b/PARPACK/SRC/MPI/pscnorm2.f +@@ -45,7 +45,7 @@ Real function pscnorm2 ( comm, n, x, inc ) + c %---------------% + c + Real +- & max, buf, zero ++ & max(1), buf, zero + parameter ( zero = 0.0 ) + c + c %---------------------% +@@ -61,6 +61,7 @@ Real function pscnorm2 ( comm, n, x, inc ) + Real + & scnrm2 + External scnrm2 ++ Real buf2(1) + c + c %-----------------------% + c | Executable Statements | +@@ -69,15 +70,15 @@ Real function pscnorm2 ( comm, n, x, inc ) + pscnorm2 = scnrm2( n, x, inc) + c + buf = pscnorm2 +- call MPI_ALLREDUCE( buf, max, 1, MPI_REAL, ++ call MPI_ALLREDUCE( [buf], max, 1, MPI_REAL, + & MPI_MAX, comm, ierr ) +- if ( max .eq. zero ) then ++ if ( max(1) .eq. zero ) then + pscnorm2 = zero + else +- buf = (pscnorm2/max)**2.0 +- call MPI_ALLREDUCE( buf, pscnorm2, 1, MPI_REAL, ++ buf = (pscnorm2/max(1))**2.0 ++ call MPI_ALLREDUCE( [buf], buf2, 1, MPI_REAL, + & MPI_SUM, comm, ierr ) +- pscnorm2 = max * sqrt(abs(pscnorm2)) ++ pscnorm2 = max(1) * sqrt(abs(buf2(1))) + endif + c + c %-----------------% +diff --git a/PARPACK/SRC/MPI/psgetv0.f b/PARPACK/SRC/MPI/psgetv0.f +index a721c9b9..597212b5 100644 +--- a/PARPACK/SRC/MPI/psgetv0.f ++++ b/PARPACK/SRC/MPI/psgetv0.f +@@ -163,7 +163,7 @@ subroutine psgetv0 + c + integer ipntr(3) + Real +- & resid(n), v(ldv,j), workd(2*n), workl(2*j) ++ & resid(n), v(ldv,j), workd(2*n), workl(2*j), buf2(1) + c + c %------------% + c | Parameters | +@@ -318,9 +318,9 @@ subroutine psgetv0 + first = .FALSE. + if (bmat .eq. 
'G') then + rnorm_buf = sdot (n, resid, 1, workd, 1) +- call MPI_ALLREDUCE( rnorm_buf, rnorm0, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_REAL, MPI_SUM, comm, ierr ) +- rnorm0 = sqrt(abs(rnorm0)) ++ rnorm0 = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + rnorm0 = psnorm2( comm, n, resid, 1 ) + end if +@@ -379,9 +379,9 @@ subroutine psgetv0 + c + if (bmat .eq. 'G') then + rnorm_buf = sdot (n, resid, 1, workd, 1) +- call MPI_ALLREDUCE( rnorm_buf, rnorm, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_REAL, MPI_SUM, comm, ierr ) +- rnorm = sqrt(abs(rnorm)) ++ rnorm = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + rnorm = psnorm2( comm, n, resid, 1 ) + end if +@@ -391,9 +391,9 @@ subroutine psgetv0 + c %--------------------------------------% + c + if (msglvl .gt. 2) then +- call psvout (comm, logfil, 1, rnorm0, ndigit, ++ call psvout (comm, logfil, 1, [rnorm0], ndigit, + & '_getv0: re-orthonalization ; rnorm0 is') +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_getv0: re-orthonalization ; rnorm is') + end if + c +@@ -424,7 +424,7 @@ subroutine psgetv0 + 50 continue + c + if (msglvl .gt. 0) then +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_getv0: B-norm of initial / restarted starting vector') + end if + if (msglvl .gt. 2) then +diff --git a/PARPACK/SRC/MPI/pslamch10.f b/PARPACK/SRC/MPI/pslamch10.f +index cd7215c5..c3c13b8e 100644 +--- a/PARPACK/SRC/MPI/pslamch10.f ++++ b/PARPACK/SRC/MPI/pslamch10.f +@@ -53,7 +53,7 @@ REAL FUNCTION PSLAMCH10( ICTXT, CMACH ) + * + * .. Local Scalars .. + INTEGER IDUMM +- REAL TEMP, TEMP1 ++ REAL TEMP, TEMP1, buf2(1) + * .. + * .. External Subroutines .. + * EXTERNAL SGAMN2D, SGAMX2D +@@ -69,14 +69,16 @@ REAL FUNCTION PSLAMCH10( ICTXT, CMACH ) + * + IF( LSAME( CMACH, 'E' ).OR.LSAME( CMACH, 'S' ).OR. 
+ $ LSAME( CMACH, 'M' ).OR.LSAME( CMACH, 'U' ) ) THEN +- CALL MPI_ALLREDUCE( TEMP1, TEMP, 1, MPI_REAL, ++ CALL MPI_ALLREDUCE( [TEMP1], buf2, 1, MPI_REAL, + $ MPI_MAX, ICTXT, IDUMM ) +-* CALL SGAMX2D( ICTXT, 'All', ' ', 1, 1, TEMP, 1, IDUMM, ++ TEMP = buf2(1) ++* CALL SGAMX2D( ICTXT, 'All', ' ', 1, 1, TEMP, 1, IDUMM, + * $ IDUMM, 1, -1, IDUMM ) + ELSE IF( LSAME( CMACH, 'L' ).OR.LSAME( CMACH, 'O' ) ) THEN +- CALL MPI_ALLREDUCE( TEMP1, TEMP, 1, MPI_REAL, ++ CALL MPI_ALLREDUCE( [TEMP1], buf2, 1, MPI_REAL, + $ MPI_MIN, ICTXT, IDUMM ) +-* CALL SGAMN2D( ICTXT, 'All', ' ', 1, 1, TEMP, 1, IDUMM, ++ TEMP = buf2(1) ++* CALL SGAMN2D( ICTXT, 'All', ' ', 1, 1, TEMP, 1, IDUMM, + * $ IDUMM, 1, -1, IDUMM ) + ELSE + TEMP = TEMP1 +diff --git a/PARPACK/SRC/MPI/psnaitr.f b/PARPACK/SRC/MPI/psnaitr.f +index ab63f99a..8b5f635d 100644 +--- a/PARPACK/SRC/MPI/psnaitr.f ++++ b/PARPACK/SRC/MPI/psnaitr.f +@@ -276,7 +276,7 @@ subroutine psnaitr + & betaj, rnorm1, smlnum, ulp, unfl, wnorm + c + Real +- & rnorm_buf ++ & rnorm_buf, buf2(1) + c + c + c %-----------------------% +@@ -393,9 +393,9 @@ subroutine psnaitr + 1000 continue + c + if (msglvl .gt. 1) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: generating Arnoldi vector number') +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_naitr: B-norm of the current residual is') + end if + c +@@ -415,7 +415,7 @@ subroutine psnaitr + c %---------------------------------------------------% + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: ****** RESTART AT STEP ******') + end if + c +@@ -566,9 +566,9 @@ subroutine psnaitr + c + if (bmat .eq. 
'G') then + rnorm_buf = sdot (n, resid, 1, workd(ipj), 1) +- call MPI_ALLREDUCE( rnorm_buf, wnorm, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_REAL, MPI_SUM, comm, ierr ) +- wnorm = sqrt(abs(wnorm)) ++ wnorm = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + wnorm = psnorm2( comm, n, resid, 1 ) + end if +@@ -642,9 +642,9 @@ subroutine psnaitr + c + if (bmat .eq. 'G') then + rnorm_buf = sdot (n, resid, 1, workd(ipj), 1) +- call MPI_ALLREDUCE( rnorm_buf, rnorm, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_REAL, MPI_SUM, comm, ierr ) +- rnorm = sqrt(abs(rnorm)) ++ rnorm = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + rnorm = psnorm2( comm, n, resid, 1 ) + end if +@@ -745,15 +745,15 @@ subroutine psnaitr + c + if (bmat .eq. 'G') then + rnorm_buf = sdot (n, resid, 1, workd(ipj), 1) +- call MPI_ALLREDUCE( rnorm_buf, rnorm1, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_REAL, MPI_SUM, comm, ierr ) +- rnorm1 = sqrt(abs(rnorm1)) ++ rnorm1 = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + rnorm1 = psnorm2( comm, n, resid, 1 ) + end if + c + if (msglvl .gt. 0 .and. iter .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_naitr: Iterative refinement for Arnoldi residual') + if (msglvl .gt. 2) then + xtemp(1) = rnorm +diff --git a/PARPACK/SRC/MPI/psnapps.f b/PARPACK/SRC/MPI/psnapps.f +index c06fba58..b6f8645b 100644 +--- a/PARPACK/SRC/MPI/psnapps.f ++++ b/PARPACK/SRC/MPI/psnapps.f +@@ -276,11 +276,11 @@ subroutine psnapps + sigmai = shifti(jj) + c + if (msglvl .gt. 
2 ) then +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_napps: shift number.') +- call psvout (comm, logfil, 1, sigmar, ndigit, ++ call psvout (comm, logfil, 1, [sigmar], ndigit, + & '_napps: The real part of the shift ') +- call psvout (comm, logfil, 1, sigmai, ndigit, ++ call psvout (comm, logfil, 1, [sigmai], ndigit, + & '_napps: The imaginary part of the shift ') + end if + c +@@ -345,9 +345,9 @@ subroutine psnapps + & tst1 = slanhs( '1', kplusp-jj+1, h, ldh, workl ) + if( abs( h( i+1,i ) ).le.max( ulp*tst1, smlnum ) ) then + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, i, ndigit, ++ call pivout (comm, logfil, 1, [i], ndigit, + & '_napps: matrix splitting at row/column no.') +- call pivout (comm, logfil, 1, jj, ndigit, ++ call pivout (comm, logfil, 1, [jj], ndigit, + & '_napps: matrix splitting with shift number.') + call psvout (comm, logfil, 1, h(i+1,i), ndigit, + & '_napps: off diagonal element.') +@@ -361,9 +361,9 @@ subroutine psnapps + 40 continue + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, istart, ndigit, ++ call pivout (comm, logfil, 1, [istart], ndigit, + & '_napps: Start of current block ') +- call pivout (comm, logfil, 1, iend, ndigit, ++ call pivout (comm, logfil, 1, [iend], ndigit, + & '_napps: End of current block ') + end if + c +@@ -635,7 +635,7 @@ subroutine psnapps + & '_napps: sigmak = (e_{kev+p}^T*Q)*e_{kev}') + call psvout (comm, logfil, 1, h(kev+1,kev), ndigit, + & '_napps: betak = e_{kev+1}^T*H*e_{kev}') +- call pivout (comm, logfil, 1, kev, ndigit, ++ call pivout (comm, logfil, 1, [kev], ndigit, + & '_napps: Order of the final Hessenberg matrix ') + if (msglvl .gt. 
2) then + call psmout (comm, logfil, kev, kev, h, ldh, ndigit, +diff --git a/PARPACK/SRC/MPI/psnaup2.f b/PARPACK/SRC/MPI/psnaup2.f +index 6195d4b6..bfca7fb1 100644 +--- a/PARPACK/SRC/MPI/psnaup2.f ++++ b/PARPACK/SRC/MPI/psnaup2.f +@@ -241,7 +241,7 @@ subroutine psnaup2 + c + + Real +- & rnorm_buf ++ & rnorm_buf, buf2(1) + c + c %-----------------------% + c | Local array arguments | +@@ -408,7 +408,7 @@ subroutine psnaup2 + iter = iter + 1 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, iter, ndigit, ++ call pivout (comm, logfil, 1, [iter], ndigit, + & '_naup2: **** Start of major iteration number ****') + end if + c +@@ -421,9 +421,9 @@ subroutine psnaup2 + np = kplusp - nev + c + if (msglvl .gt. 1) then +- call pivout (comm, logfil, 1, nev, ndigit, ++ call pivout (comm, logfil, 1, [nev], ndigit, + & '_naup2: The length of the current Arnoldi factorization') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naup2: Extend the Arnoldi factorization by') + end if + c +@@ -455,7 +455,7 @@ subroutine psnaup2 + update = .false. + c + if (msglvl .gt. 1) then +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_naup2: Corresponding B-norm of the residual') + end if + c +@@ -699,7 +699,7 @@ subroutine psnaup2 + end if + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, nconv, ndigit, ++ call pivout (comm, logfil, 1, [nconv], ndigit, + & '_naup2: no. of "converged" Ritz values at this iter.') + if (msglvl .gt. 1) then + kp(1) = nev +@@ -751,7 +751,7 @@ subroutine psnaup2 + end if + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naup2: The number of shifts to apply ') + call psvout (comm, logfil, np, ritzr, ndigit, + & '_naup2: Real part of the shifts') +@@ -810,16 +810,16 @@ subroutine psnaup2 + c + if (bmat .eq. 
'G') then + rnorm_buf = sdot (n, resid, 1, workd, 1) +- call MPI_ALLREDUCE( rnorm_buf, rnorm, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1, + & MPI_REAL, MPI_SUM, comm, ierr ) +- rnorm = sqrt(abs(rnorm)) ++ rnorm = sqrt(abs(buf2(1))) + else if (bmat .eq. 'I') then + rnorm = psnorm2( comm, n, resid, 1 ) + end if + cnorm = .false. + c + if (msglvl .gt. 2) then +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_naup2: B-norm of residual for compressed factorization') + call psmout (comm, logfil, nev, nev, h, ldh, ndigit, + & '_naup2: Compressed upper Hessenberg matrix H') +diff --git a/PARPACK/SRC/MPI/psnaupd.f b/PARPACK/SRC/MPI/psnaupd.f +index a55f5fba..35e85b74 100644 +--- a/PARPACK/SRC/MPI/psnaupd.f ++++ b/PARPACK/SRC/MPI/psnaupd.f +@@ -650,9 +650,9 @@ subroutine psnaupd + if (info .eq. 2) info = 3 + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, mxiter, ndigit, ++ call pivout (comm, logfil, 1, [mxiter], ndigit, + & '_naupd: Number of update iterations taken') +- call pivout (comm, logfil, 1, np, ndigit, ++ call pivout (comm, logfil, 1, [np], ndigit, + & '_naupd: Number of wanted "converged" Ritz values') + call psvout (comm, logfil, np, workl(ritzr), ndigit, + & '_naupd: Real part of the final Ritz values') +diff --git a/PARPACK/SRC/MPI/psneupd.f b/PARPACK/SRC/MPI/psneupd.f +index c7f939a4..d4090b66 100644 +--- a/PARPACK/SRC/MPI/psneupd.f ++++ b/PARPACK/SRC/MPI/psneupd.f +@@ -617,9 +617,9 @@ subroutine psneupd + c %-----------------------------------------------------------% + c + if (msglvl .gt. 
2) then +- call pivout(comm, logfil, 1, numcnv, ndigit, ++ call pivout(comm, logfil, 1, [numcnv], ndigit, + & '_neupd: Number of specified eigenvalues') +- call pivout(comm, logfil, 1, nconv, ndigit, ++ call pivout(comm, logfil, 1, [nconv], ndigit, + & '_neupd: Number of "converged" eigenvalues') + end if + c +diff --git a/PARPACK/SRC/MPI/psngets.f b/PARPACK/SRC/MPI/psngets.f +index c4bf9ac7..91a6d730 100644 +--- a/PARPACK/SRC/MPI/psngets.f ++++ b/PARPACK/SRC/MPI/psngets.f +@@ -226,8 +226,8 @@ subroutine psngets + tngets = tngets + (t1 - t0) + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, kev, ndigit, '_ngets: KEV is') +- call pivout (comm, logfil, 1, np, ndigit, '_ngets: NP is') ++ call pivout (comm, logfil, 1, [kev], ndigit, '_ngets: KEV is') ++ call pivout (comm, logfil, 1, [np], ndigit, '_ngets: NP is') + call psvout (comm, logfil, kev+np, ritzr, ndigit, + & '_ngets: Eigenvalues of current H matrix -- real part') + call psvout (comm, logfil, kev+np, ritzi, ndigit, +diff --git a/PARPACK/SRC/MPI/psnorm2.f b/PARPACK/SRC/MPI/psnorm2.f +index b5fbcd13..8e487b35 100644 +--- a/PARPACK/SRC/MPI/psnorm2.f ++++ b/PARPACK/SRC/MPI/psnorm2.f +@@ -45,7 +45,7 @@ Real function psnorm2 ( comm, n, x, inc ) + c %---------------% + c + Real +- & max, buf, zero ++ & max, buf, zero, buf2(1) + parameter ( zero = 0.0 ) + c + c %---------------------% +@@ -69,15 +69,16 @@ Real function psnorm2 ( comm, n, x, inc ) + psnorm2 = snrm2( n, x, inc) + c + buf = psnorm2 +- call MPI_ALLREDUCE( buf, max, 1, MPI_REAL, ++ call MPI_ALLREDUCE( [buf], buf2, 1, MPI_REAL, + & MPI_MAX, comm, ierr ) ++ max = buf2(1) + if ( max .eq. 
zero ) then + psnorm2 = zero + else + buf = (psnorm2/max)**2.0 +- call MPI_ALLREDUCE( buf, psnorm2, 1, MPI_REAL, ++ call MPI_ALLREDUCE( [buf], buf2, 1, MPI_REAL, + & MPI_SUM, comm, ierr ) +- psnorm2 = max * sqrt(abs(psnorm2)) ++ psnorm2 = max * sqrt(abs(buf2(1))) + endif + c + c %----------------% +diff --git a/PARPACK/SRC/MPI/pssaitr.f b/PARPACK/SRC/MPI/pssaitr.f +index a0fde737..8ceaebe4 100644 +--- a/PARPACK/SRC/MPI/pssaitr.f ++++ b/PARPACK/SRC/MPI/pssaitr.f +@@ -264,7 +264,7 @@ subroutine pssaitr + integer i, ierr, ipj, irj, ivj, iter, itry, j, msglvl, infol, + & jj + Real +- & rnorm1, wnorm, safmin, temp1 ++ & rnorm1, wnorm(1), safmin, temp1, temp2(1) + save orth1, orth2, rstart, step3, step4, + & ierr, ipj, irj, ivj, iter, itry, j, msglvl, + & rnorm1, safmin, wnorm +@@ -392,9 +392,9 @@ subroutine pssaitr + 1000 continue + c + if (msglvl .gt. 2) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_saitr: generating Arnoldi vector no.') +- call psvout (comm, logfil, 1, rnorm, ndigit, ++ call psvout (comm, logfil, 1, [rnorm], ndigit, + & '_saitr: B-norm of the current residual =') + end if + c +@@ -412,7 +412,7 @@ subroutine pssaitr + c %---------------------------------------------------% + c + if (msglvl .gt. 0) then +- call pivout (comm, logfil, 1, j, ndigit, ++ call pivout (comm, logfil, 1, [j], ndigit, + & '_saitr: ****** restart at step ******') + end if + c +@@ -572,16 +572,16 @@ subroutine pssaitr + c %----------------------------------% + c + rnorm_buf = sdot (n, resid, 1, workd(ivj), 1) +- call MPI_ALLREDUCE( rnorm_buf, wnorm, 1, ++ call MPI_ALLREDUCE( [rnorm_buf], wnorm, 1, + & MPI_REAL, MPI_SUM, comm, ierr ) +- wnorm = sqrt(abs(wnorm)) ++ wnorm(1) = sqrt(abs(wnorm(1))) + else if (bmat .eq. 
'G') then
+ rnorm_buf = sdot (n, resid, 1, workd(ipj), 1)
+- call MPI_ALLREDUCE( rnorm_buf, wnorm, 1,
++ call MPI_ALLREDUCE( [rnorm_buf], wnorm, 1,
+ & MPI_REAL, MPI_SUM, comm, ierr )
+ wnorm = sqrt(abs(wnorm))
+ else if (bmat .eq. 'I') then
+- wnorm = psnorm2( comm, n, resid, 1 )
++ wnorm(1) = psnorm2( comm, n, resid, 1 )
+ end if
+ c
+ c %-----------------------------------------%
+@@ -669,9 +669,9 @@ subroutine pssaitr
+ c
+ if (bmat .eq. 'G') then
+ rnorm_buf = sdot (n, resid, 1, workd(ipj), 1)
+- call MPI_ALLREDUCE( rnorm_buf, rnorm, 1,
++ call MPI_ALLREDUCE( [rnorm_buf], temp2, 1,
+ & MPI_REAL, MPI_SUM, comm, ierr )
+- rnorm = sqrt(abs(rnorm))
++ rnorm = sqrt(abs(temp2(1)))
+ else if (bmat .eq. 'I') then
+ rnorm = psnorm2( comm, n, resid, 1 )
+ end if
+@@ -691,7 +691,7 @@ subroutine pssaitr
+ c | to enforce ||v(:,1:j)^T * r_{j}|| .le. eps * || r_{j} || |
+ c %-----------------------------------------------------------%
+ c
+- if (rnorm .gt. 0.717*wnorm) go to 100
++ if (rnorm .gt. 0.717*wnorm(1)) go to 100
+ nrorth = nrorth + 1
+ c
+ c %---------------------------------------------------%
+@@ -704,7 +704,7 @@ subroutine pssaitr
+ 80 continue
+ c
+ if (msglvl .gt. 2) then
+- xtemp(1) = wnorm
++ xtemp(1) = wnorm(1)
+ xtemp(2) = rnorm
+ call psvout (comm, logfil, 2, xtemp, ndigit,
+ & '_naitr: re-orthonalization ; wnorm and rnorm are')
+@@ -769,15 +769,15 @@ subroutine pssaitr
+ c
+ if (bmat .eq. 'G') then
+ rnorm_buf = sdot (n, resid, 1, workd(ipj), 1)
+- call MPI_ALLREDUCE( rnorm_buf, rnorm1, 1,
++ call MPI_ALLREDUCE( [rnorm_buf], temp2, 1,
+ & MPI_REAL, MPI_SUM, comm, ierr )
+- rnorm1 = sqrt(abs(rnorm1))
++ rnorm1 = sqrt(abs(temp2(1)))
+ else if (bmat .eq. 'I') then
+ rnorm1 = psnorm2( comm, n, resid, 1 )
+ end if
+ c
+ if (msglvl .gt. 0 .and. iter .gt. 0) then
+- call pivout (comm, logfil, 1, j, ndigit,
++ call pivout (comm, logfil, 1, [j], ndigit,
+ & '_naitr: Iterative refinement for Arnoldi residual')
+ if (msglvl .gt. 2) then
+ xtemp(1) = rnorm
+diff --git a/PARPACK/SRC/MPI/pssapps.f b/PARPACK/SRC/MPI/pssapps.f
+index 481cc355..07f96452 100644
+--- a/PARPACK/SRC/MPI/pssapps.f
++++ b/PARPACK/SRC/MPI/pssapps.f
+@@ -271,9 +271,9 @@ subroutine pssapps
+ big = abs(h(i,2)) + abs(h(i+1,2))
+ if (h(i+1,1) .le. epsmch*big) then
+ if (msglvl .gt. 0) then
+- call pivout (comm, logfil, 1, i, ndigit,
++ call pivout (comm, logfil, 1, [i], ndigit,
+ & '_sapps: deflation at row/column no.')
+- call pivout (comm, logfil, 1, jj, ndigit,
++ call pivout (comm, logfil, 1, [jj], ndigit,
+ & '_sapps: occurred before shift number.')
+ call psvout (comm, logfil, 1, h(i+1,1), ndigit,
+ & '_sapps: the corresponding off diagonal element')
+@@ -442,7 +442,7 @@ subroutine pssapps
+ big = abs(h(i,2)) + abs(h(i+1,2))
+ if (h(i+1,1) .le. epsmch*big) then
+ if (msglvl .gt. 0) then
+- call pivout (comm, logfil, 1, i, ndigit,
++ call pivout (comm, logfil, 1, [i], ndigit,
+ & '_sapps: deflation at row/column no.')
+ call psvout (comm, logfil, 1, h(i+1,1), ndigit,
+ & '_sapps: the corresponding off diagonal element')
+diff --git a/PARPACK/SRC/MPI/pssaup2.f b/PARPACK/SRC/MPI/pssaup2.f
+index ab00ec77..25af8b7f 100644
+--- a/PARPACK/SRC/MPI/pssaup2.f
++++ b/PARPACK/SRC/MPI/pssaup2.f
+@@ -241,7 +241,7 @@ subroutine pssaup2
+ integer ierr, iter, j, kplusp, msglvl, nconv, nevbef, nev0,
+ & np0, nptemp, nevd2, nevm2, kp(3)
+ Real
+- & rnorm, temp, eps23
++ & rnorm, temp, eps23, buf2(1)
+ save cnorm, getv0, initv, update, ushift,
+ & iter, kplusp, msglvl, nconv, nev0, np0,
+ & rnorm, eps23
+@@ -424,13 +424,13 @@ subroutine pssaup2
+ iter = iter + 1
+ c
+ if (msglvl .gt. 0) then
+- call pivout (comm, logfil, 1, iter, ndigit,
++ call pivout (comm, logfil, 1, [iter], ndigit,
+ & '_saup2: **** Start of major iteration number ****')
+ end if
+ if (msglvl .gt. 1) then
+- call pivout (comm, logfil, 1, nev, ndigit,
++ call pivout (comm, logfil, 1, [nev], ndigit,
+ & '_saup2: The length of the current Lanczos factorization')
+- call pivout (comm, logfil, 1, np, ndigit,
++ call pivout (comm, logfil, 1, [np], ndigit,
+ & '_saup2: Extend the Lanczos factorization by')
+ end if
+ c
+@@ -469,7 +469,7 @@ subroutine pssaup2
+ update = .false.
+ c
+ if (msglvl .gt. 1) then
+- call psvout (comm, logfil, 1, rnorm, ndigit,
++ call psvout (comm, logfil, 1, [rnorm], ndigit,
+ & '_saup2: Current B-norm of residual for factorization')
+ end if
+ c
+@@ -719,7 +719,7 @@ subroutine pssaup2
+ end if
+ c
+ if (msglvl .gt. 0) then
+- call pivout (comm, logfil, 1, nconv, ndigit,
++ call pivout (comm, logfil, 1, [nconv], ndigit,
+ & '_saup2: no. of "converged" Ritz values at this iter.')
+ if (msglvl .gt. 1) then
+ kp(1) = nev
+@@ -766,7 +766,7 @@ subroutine pssaup2
+ if (ishift .eq. 0) call scopy (np, workl, 1, ritz, 1)
+ c
+ if (msglvl .gt. 2) then
+- call pivout (comm, logfil, 1, np, ndigit,
++ call pivout (comm, logfil, 1, [np], ndigit,
+ & '_saup2: The number of shifts to apply ')
+ call psvout (comm, logfil, np, workl, ndigit,
+ & '_saup2: shifts selected')
+@@ -825,9 +825,9 @@ subroutine pssaup2
+ c
+ if (bmat .eq. 'G') then
+ rnorm_buf = sdot (n, resid, 1, workd, 1)
+- call MPI_ALLREDUCE( rnorm_buf, rnorm, 1,
++ call MPI_ALLREDUCE( [rnorm_buf], buf2, 1,
+ & MPI_REAL, MPI_SUM, comm, ierr )
+- rnorm = sqrt(abs(rnorm))
++ rnorm = sqrt(abs(buf2(1)))
+ else if (bmat .eq. 'I') then
+ rnorm = psnorm2( comm, n, resid, 1 )
+ end if
+@@ -835,7 +835,7 @@ subroutine pssaup2
+ 130 continue
+ c
+ if (msglvl .gt. 2) then
+- call psvout (comm, logfil, 1, rnorm, ndigit,
++ call psvout (comm, logfil, 1, [rnorm], ndigit,
+ & '_saup2: B-norm of residual for NEV factorization')
+ call psvout (comm, logfil, nev, h(1,2), ndigit,
+ & '_saup2: main diagonal of compressed H matrix')
+diff --git a/PARPACK/SRC/MPI/pssaupd.f b/PARPACK/SRC/MPI/pssaupd.f
+index 7f616008..3077ff5f 100644
+--- a/PARPACK/SRC/MPI/pssaupd.f
++++ b/PARPACK/SRC/MPI/pssaupd.f
+@@ -652,9 +652,9 @@ subroutine pssaupd
+ if (info .eq. 2) info = 3
+ c
+ if (msglvl .gt. 0) then
+- call pivout (comm, logfil, 1, mxiter, ndigit,
++ call pivout (comm, logfil, 1, [mxiter], ndigit,
+ & '_saupd: number of update iterations taken')
+- call pivout (comm, logfil, 1, np, ndigit,
++ call pivout (comm, logfil, 1, [np], ndigit,
+ & '_saupd: number of "converged" Ritz values')
+ call psvout (comm, logfil, np, workl(Ritz), ndigit,
+ & '_saupd: final Ritz values')
+diff --git a/PARPACK/SRC/MPI/psseupd.f b/PARPACK/SRC/MPI/psseupd.f
+index 23e2caad..85175870 100644
+--- a/PARPACK/SRC/MPI/psseupd.f
++++ b/PARPACK/SRC/MPI/psseupd.f
+@@ -523,9 +523,9 @@ subroutine psseupd
+ c %-----------------------------------------------------------%
+ c
+ if (msglvl .gt. 2) then
+- call pivout(comm, logfil, 1, numcnv, ndigit,
++ call pivout(comm, logfil, 1, [numcnv], ndigit,
+ & '_neupd: Number of specified eigenvalues')
+- call pivout(comm, logfil, 1, nconv, ndigit,
++ call pivout(comm, logfil, 1, [nconv], ndigit,
+ & '_neupd: Number of "converged" eigenvalues')
+ end if
+ c
+diff --git a/PARPACK/SRC/MPI/pssgets.f b/PARPACK/SRC/MPI/pssgets.f
+index 487689b0..dcc08bcb 100644
+--- a/PARPACK/SRC/MPI/pssgets.f
++++ b/PARPACK/SRC/MPI/pssgets.f
+@@ -216,8 +216,8 @@ subroutine pssgets
+ tsgets = tsgets + (t1 - t0)
+ c
+ if (msglvl .gt. 0) then
+- call pivout (comm, logfil, 1, kev, ndigit, '_sgets: KEV is')
+- call pivout (comm, logfil, 1, np, ndigit, '_sgets: NP is')
++ call pivout (comm, logfil, 1, [kev], ndigit, '_sgets: KEV is')
++ call pivout (comm, logfil, 1, [np], ndigit, '_sgets: NP is')
+ call psvout (comm, logfil, kev+np, ritz, ndigit,
+ & '_sgets: Eigenvalues of current H matrix')
+ call psvout (comm, logfil, kev+np, bounds, ndigit,
+diff --git a/PARPACK/SRC/MPI/pzgetv0.f b/PARPACK/SRC/MPI/pzgetv0.f
+index 42cd0865..29f18f5b 100644
+--- a/PARPACK/SRC/MPI/pzgetv0.f
++++ b/PARPACK/SRC/MPI/pzgetv0.f
+@@ -185,7 +185,7 @@ subroutine pzgetv0
+ save first, iseed, inits, iter, msglvl, orth, rnorm0
+ c
+ Complex*16
+- & cnorm_buf
++ & cnorm_buf, buf2(1)
+ c
+ c %----------------------%
+ c | External Subroutines |
+@@ -332,8 +332,9 @@ subroutine pzgetv0
+ first = .FALSE.
+ if (bmat .eq. 'G') then
+ cnorm_buf = zdotc (n, resid, 1, workd, 1)
+- call MPI_ALLREDUCE( cnorm_buf, cnorm, 1,
++ call MPI_ALLREDUCE( [cnorm_buf], buf2, 1,
+ & MPI_DOUBLE_COMPLEX , MPI_SUM, comm, ierr )
++ cnorm = buf2(1)
+ rnorm0 = sqrt(dlapy2 (dble (cnorm),dimag (cnorm)))
+ else if (bmat .eq. 'I') then
+ rnorm0 = pdznorm2 ( comm, n, resid, 1)
+@@ -393,8 +394,9 @@ subroutine pzgetv0
+ c
+ if (bmat .eq. 'G') then
+ cnorm_buf = zdotc (n, resid, 1, workd, 1)
+- call MPI_ALLREDUCE( cnorm_buf, cnorm, 1,
++ call MPI_ALLREDUCE( [cnorm_buf], buf2, 1,
+ & MPI_DOUBLE_COMPLEX , MPI_SUM, comm, ierr )
++ cnorm = buf2(1)
+ rnorm = sqrt(dlapy2 (dble (cnorm),dimag (cnorm)))
+ else if (bmat .eq. 'I') then
+ rnorm = pdznorm2 (comm, n, resid, 1)
+@@ -405,9 +407,9 @@ subroutine pzgetv0
+ c %--------------------------------------%
+ c
+ if (msglvl .gt. 2) then
+- call pdvout (comm, logfil, 1, rnorm0, ndigit,
++ call pdvout (comm, logfil, 1, [rnorm0], ndigit,
+ & '_getv0: re-orthonalization ; rnorm0 is')
+- call pdvout (comm, logfil, 1, rnorm, ndigit,
++ call pdvout (comm, logfil, 1, [rnorm], ndigit,
+ & '_getv0: re-orthonalization ; rnorm is')
+ end if
+ c
+@@ -439,7 +441,7 @@ subroutine pzgetv0
+ c
+ if (msglvl .gt. 0) then
+ cnorm2 = dcmplx (rnorm,rzero)
+- call pzvout (comm, logfil, 1, cnorm2, ndigit,
++ call pzvout (comm, logfil, 1, [cnorm2], ndigit,
+ & '_getv0: B-norm of initial / restarted starting vector')
+ end if
+ if (msglvl .gt. 2) then
+diff --git a/PARPACK/SRC/MPI/pznaitr.f b/PARPACK/SRC/MPI/pznaitr.f
+index 0b8af181..4ec77e4e 100644
+--- a/PARPACK/SRC/MPI/pznaitr.f
++++ b/PARPACK/SRC/MPI/pznaitr.f
+@@ -293,7 +293,7 @@ subroutine pznaitr
+ & betaj, rnorm1, smlnum, ulp, unfl, wnorm
+ c
+ Complex*16
+- & cnorm_buf
++ & cnorm_buf, buf2(1)
+ c
+ c %----------------------%
+ c | External Subroutines |
+@@ -404,9 +404,9 @@ subroutine pznaitr
+ 1000 continue
+ c
+ if (msglvl .gt. 1) then
+- call pivout (comm, logfil, 1, j, ndigit,
++ call pivout (comm, logfil, 1, [j], ndigit,
+ & '_naitr: generating Arnoldi vector number')
+- call pzvout (comm, logfil, 1, rnorm, ndigit,
++ call pdvout (comm, logfil, 1, [rnorm], ndigit,
+ & '_naitr: B-norm of the current residual is')
+ end if
+ c
+@@ -426,7 +426,7 @@ subroutine pznaitr
+ c %---------------------------------------------------%
+ c
+ if (msglvl .gt. 0) then
+- call pivout (comm, logfil, 1, j, ndigit,
++ call pivout (comm, logfil, 1, [j], ndigit,
+ & '_naitr: ****** RESTART AT STEP ******')
+ end if
+ c
+@@ -577,9 +577,10 @@ subroutine pznaitr
+ c
+ if (bmat .eq. 'G') then
+ cnorm_buf = zdotc (n, resid, 1, workd(ipj), 1)
+- call MPI_ALLREDUCE( cnorm_buf, cnorm, 1,
++ call MPI_ALLREDUCE( [cnorm_buf], buf2, 1,
+ & MPI_DOUBLE_COMPLEX, MPI_SUM, comm, ierr )
+- wnorm = sqrt( dlapy2(dble(cnorm),dimag(cnorm)) )
++ cnorm = buf2(1)
++ wnorm = sqrt( dlapy2(dble(cnorm),dimag(cnorm)) )
+ else if (bmat .eq. 'I') then
+ wnorm = pdznorm2(comm, n, resid, 1)
+ end if
+@@ -653,8 +654,9 @@ subroutine pznaitr
+ c
+ if (bmat .eq. 'G') then
+ cnorm_buf = zdotc (n, resid, 1, workd(ipj), 1)
+- call MPI_ALLREDUCE( cnorm_buf, cnorm, 1,
++ call MPI_ALLREDUCE( [cnorm_buf], buf2, 1,
+ & MPI_DOUBLE_COMPLEX, MPI_SUM, comm, ierr )
++ cnorm = buf2(1)
+ rnorm = sqrt( dlapy2(dble(cnorm),dimag(cnorm)) )
+ else if (bmat .eq. 'I') then
+ rnorm = pdznorm2(comm, n, resid, 1)
+@@ -757,15 +759,16 @@ subroutine pznaitr
+ c
+ if (bmat .eq. 'G') then
+ cnorm_buf = zdotc (n, resid, 1, workd(ipj), 1)
+- call MPI_ALLREDUCE( cnorm_buf, cnorm, 1,
++ call MPI_ALLREDUCE( [cnorm_buf], buf2, 1,
+ & MPI_DOUBLE_COMPLEX, MPI_SUM, comm, ierr )
+- rnorm1 = sqrt( dlapy2(dble(cnorm),dimag(cnorm)) )
++ cnorm = buf2(1)
++ rnorm1 = sqrt( dlapy2(dble(cnorm),dimag(cnorm)) )
+ else if (bmat .eq. 'I') then
+ rnorm1 = pdznorm2(comm, n, resid, 1)
+ end if
+ c
+ if (msglvl .gt. 0 .and. iter .gt. 0 ) then
+- call pivout (comm, logfil, 1, j, ndigit,
++ call pivout (comm, logfil, 1, [j], ndigit,
+ & '_naitr: Iterative refinement for Arnoldi residual')
+ if (msglvl .gt. 2) then
+ rtemp(1) = rnorm
+diff --git a/PARPACK/SRC/MPI/pznapps.f b/PARPACK/SRC/MPI/pznapps.f
+index 3e00c903..4952544c 100644
+--- a/PARPACK/SRC/MPI/pznapps.f
++++ b/PARPACK/SRC/MPI/pznapps.f
+@@ -283,9 +283,9 @@ subroutine pznapps
+ sigma = shift(jj)
+ c
+ if (msglvl .gt. 2 ) then
+- call pivout (comm, logfil, 1, jj, ndigit,
++ call pivout (comm, logfil, 1, [jj], ndigit,
+ & '_napps: shift number.')
+- call pzvout (comm, logfil, 1, sigma, ndigit,
++ call pzvout (comm, logfil, 1, [sigma], ndigit,
+ & '_napps: Value of the shift ')
+ end if
+ c
+@@ -306,9 +306,9 @@ subroutine pznapps
+ if ( abs(dble(h(i+1,i)))
+ & .le. max(ulp*tst1, smlnum) ) then
+ if (msglvl .gt. 0) then
+- call pivout (comm, logfil, 1, i, ndigit,
++ call pivout (comm, logfil, 1, [i], ndigit,
+ & '_napps: matrix splitting at row/column no.')
+- call pivout (comm, logfil, 1, jj, ndigit,
++ call pivout (comm, logfil, 1, [jj], ndigit,
+ & '_napps: matrix splitting with shift number.')
+ call pzvout (comm, logfil, 1, h(i+1,i), ndigit,
+ & '_napps: off diagonal element.')
+@@ -322,9 +322,9 @@ subroutine pznapps
+ 40 continue
+ c
+ if (msglvl .gt. 2) then
+- call pivout (comm, logfil, 1, istart, ndigit,
++ call pivout (comm, logfil, 1, [istart], ndigit,
+ & '_napps: Start of current block ')
+- call pivout (comm, logfil, 1, iend, ndigit,
++ call pivout (comm, logfil, 1, [iend], ndigit,
+ & '_napps: End of current block ')
+ end if
+ c
+@@ -500,7 +500,7 @@ subroutine pznapps
+ & '_napps: sigmak = (e_{kev+p}^T*Q)*e_{kev}')
+ call pzvout (comm, logfil, 1, h(kev+1,kev), ndigit,
+ & '_napps: betak = e_{kev+1}^T*H*e_{kev}')
+- call pivout (comm, logfil, 1, kev, ndigit,
++ call pivout (comm, logfil, 1, [kev], ndigit,
+ & '_napps: Order of the final Hessenberg matrix ')
+ if (msglvl .gt. 2) then
+ call pzmout (comm, logfil, kev, kev, h, ldh, ndigit,
+diff --git a/PARPACK/SRC/MPI/pznaup2.f b/PARPACK/SRC/MPI/pznaup2.f
+index bf3c2016..78521979 100644
+--- a/PARPACK/SRC/MPI/pznaup2.f
++++ b/PARPACK/SRC/MPI/pznaup2.f
+@@ -237,7 +237,7 @@ subroutine pznaup2
+ & nevbef, nev0 , np0, eps23
+ c
+ Double precision
+- & cmpnorm_buf
++ & cmpnorm_buf, buf2(1)
+ c
+ c %-----------------------%
+ c | Local array arguments |
+@@ -401,7 +401,7 @@ subroutine pznaup2
+ iter = iter + 1
+ c
+ if (msglvl .gt. 0) then
+- call pivout (comm, logfil, 1, iter, ndigit,
++ call pivout (comm, logfil, 1, [iter], ndigit,
+ & '_naup2: **** Start of major iteration number ****')
+ end if
+ c
+@@ -414,9 +414,9 @@ subroutine pznaup2
+ np = kplusp - nev
+ c
+ if (msglvl .gt. 1) then
+- call pivout (comm, logfil, 1, nev, ndigit,
++ call pivout (comm, logfil, 1, [nev], ndigit,
+ & '_naup2: The length of the current Arnoldi factorization')
+- call pivout (comm, logfil, 1, np, ndigit,
++ call pivout (comm, logfil, 1, [np], ndigit,
+ & '_naup2: Extend the Arnoldi factorization by')
+ end if
+ c
+@@ -443,7 +443,7 @@ subroutine pznaup2
+ update = .false.
+ c
+ if (msglvl .gt. 1) then
+- call pdvout (comm, logfil, 1, rnorm, ndigit,
++ call pdvout (comm, logfil, 1, [rnorm], ndigit,
+ & '_naup2: Corresponding B-norm of the residual')
+ end if
+ c
+@@ -674,7 +674,7 @@ subroutine pznaup2
+ end if
+ c
+ if (msglvl .gt. 0) then
+- call pivout (comm, logfil, 1, nconv, ndigit,
++ call pivout (comm, logfil, 1, [nconv], ndigit,
+ & '_naup2: no. of "converged" Ritz values at this iter.')
+ if (msglvl .gt. 1) then
+ kp(1) = nev
+@@ -714,7 +714,7 @@ subroutine pznaup2
+ end if
+ c
+ if (msglvl .gt. 2) then
+- call pivout (comm, logfil, 1, np, ndigit,
++ call pivout (comm, logfil, 1, [np], ndigit,
+ & '_naup2: The number of shifts to apply ')
+ call pzvout (comm, logfil, np, ritz, ndigit,
+ & '_naup2: values of the shifts')
+@@ -771,8 +771,9 @@ subroutine pznaup2
+ c
+ if (bmat .eq. 'G') then
+ cmpnorm_buf = zdotc (n, resid, 1, workd, 1)
+- call MPI_ALLREDUCE( cmpnorm_buf, cmpnorm, 1,
++ call MPI_ALLREDUCE( [cmpnorm_buf], buf2, 1,
+ & MPI_DOUBLE_COMPLEX, MPI_SUM, comm, ierr )
++ cmpnorm = buf2(1)
+ rnorm = sqrt(dlapy2(dble(cmpnorm),dimag(cmpnorm)))
+ else if (bmat .eq. 'I') then
+ rnorm = pdznorm2(comm, n, resid, 1)
+@@ -780,7 +781,7 @@ subroutine pznaup2
+ cnorm = .false.
+ c
+ if (msglvl .gt. 2) then
+- call pdvout (comm, logfil, 1, rnorm, ndigit,
++ call pdvout (comm, logfil, 1, [rnorm], ndigit,
+ & '_naup2: B-norm of residual for compressed factorization')
+ call pzmout (comm, logfil, nev, nev, h, ldh, ndigit,
+ & '_naup2: Compressed upper Hessenberg matrix H')
+diff --git a/PARPACK/SRC/MPI/pznaupd.f b/PARPACK/SRC/MPI/pznaupd.f
+index 979402b3..7d6ea4c4 100644
+--- a/PARPACK/SRC/MPI/pznaupd.f
++++ b/PARPACK/SRC/MPI/pznaupd.f
+@@ -626,9 +626,9 @@ subroutine pznaupd
+ if (info .eq. 2) info = 3
+ c
+ if (msglvl .gt. 0) then
+- call pivout (comm, logfil, 1, mxiter, ndigit,
++ call pivout (comm, logfil, 1, [mxiter], ndigit,
+ & '_naupd: Number of update iterations taken')
+- call pivout (comm, logfil, 1, np, ndigit,
++ call pivout (comm, logfil, 1, [np], ndigit,
+ & '_naupd: Number of wanted "converged" Ritz values')
+ call pzvout (comm, logfil, np, workl(ritz), ndigit,
+ & '_naupd: The final Ritz values')
+diff --git a/PARPACK/SRC/MPI/pzneupd.f b/PARPACK/SRC/MPI/pzneupd.f
+index 4bb1173a..7b6e7fe1 100644
+--- a/PARPACK/SRC/MPI/pzneupd.f
++++ b/PARPACK/SRC/MPI/pzneupd.f
+@@ -558,9 +558,9 @@ subroutine pzneupd
+ c %-----------------------------------------------------------%
+ c
+ if (msglvl .gt. 2) then
+- call pivout(comm, logfil, 1, numcnv, ndigit,
++ call pivout(comm, logfil, 1, [numcnv], ndigit,
+ & '_neupd: Number of specified eigenvalues')
+- call pivout(comm, logfil, 1, nconv, ndigit,
++ call pivout(comm, logfil, 1, [nconv], ndigit,
+ & '_neupd: Number of "converged" eigenvalues')
+ end if
+ c
+diff --git a/PARPACK/SRC/MPI/pzngets.f b/PARPACK/SRC/MPI/pzngets.f
+index 97bc4070..cb58f5ce 100644
+--- a/PARPACK/SRC/MPI/pzngets.f
++++ b/PARPACK/SRC/MPI/pzngets.f
+@@ -177,8 +177,8 @@ subroutine pzngets ( comm, ishift, which, kev, np, ritz, bounds)
+ tcgets = tcgets + (t1 - t0)
+ c
+ if (msglvl .gt. 0) then
+- call pivout (comm, logfil, 1, kev, ndigit, '_ngets: KEV is')
+- call pivout (comm, logfil, 1, np, ndigit, '_ngets: NP is')
++ call pivout (comm, logfil, 1, [kev], ndigit, '_ngets: KEV is')
++ call pivout (comm, logfil, 1, [np], ndigit, '_ngets: NP is')
+ call pzvout (comm, logfil, kev+np, ritz, ndigit,
+ & '_ngets: Eigenvalues of current H matrix ')
+ call pzvout (comm, logfil, kev+np, bounds, ndigit,
+diff --git a/scripts/travis_fedora.sh b/scripts/travis_fedora.sh
+new file mode 100755
+index 00000000..0bfa5736
+--- /dev/null
++++ b/scripts/travis_fedora.sh
+@@ -0,0 +1,51 @@
++#!/bin/sh
++## -e : Make sure all errors cause the script to fail
++## -x be verbose; write what we are doing, as we do it
++set -ex
++## Should we init a container?
++if [ ".$1" = .setup ]
++then
++ # fedora
++ # note: when you PR, docker-cp provides, in the container, the branch associated with the PR (not master where there's nothing new)
++ # 1. docker create --name mobydick IMAGE CMD <=> create a container (= instance of image) but container is NOT yet started
++ # 2. docker cp -a ${TRAVIS_BUILD_DIR} mobydick:/tmp <=> copy git repository (CI worker, checkout-ed on PR branch) into the container
++ # note: docker-cp works only if copy from/to containers (not images)
++ # 3. docker start -a mobydick <=> start to run the container (initialized with docker-cp)
++ test . != ".$2" && mpi="$2" || mpi=openmpi
++ test . != ".$3" && version="$3" || version=latest
++ time sudo docker pull registry.fedoraproject.org/fedora:$version ||
++ sudo docker pull fedora:$version
++ time sudo docker create --name mobydick fedora:$version \
++ /tmp/arpack-ng/scripts/travis_fedora.sh $mpi
++ time sudo docker cp -a ${TRAVIS_BUILD_DIR} mobydick:/tmp
++ time sudo docker start -a mobydick ; e=$?
++ exit $e
++fi
++
++test . != ".$1" && mpi="$1" || mpi=openmpi
++
++## If we are called as root, setup everything
++if [ $UID -eq 0 ]
++then
++ time dnf -y upgrade
++ time dnf -y install environment-modules git \
++ gfortran openblas-devel cmake ${mpi}-devel make gcc-c++
++ useradd test
++ chown -R test /tmp
++ sudo -u test $0 $mpi
++## If we are called as normal user, run test
++else
++ . /etc/profile.d/modules.sh
++ module load mpi
++ export OMPI_MCA_rmaps_base_oversubscribe=yes
++ cd /tmp
++ cd arpack-ng
++ git status
++ git log -2
++ mkdir -p build && cd build
++ time cmake -DEXAMPLES=ON -DMPI=ON -DICB=ON ..
++ export VERBOSE=1
++ time make all
++ time make test
++ tail -n 300 ./Testing/Temporary/LastTest.log
++fi