The preprocessor should be standardized #65

Open
klausler opened this issue Nov 1, 2019 · 59 comments

@klausler commented Nov 1, 2019

Most Fortran compilers support some form of source code preprocessing using a syntax similar to the preprocessing directives and macro references of C/C++. The behavior of the preprocessing features in the various Fortran compilers varies quite a bit (see https://github.com/flang-compiler/f18/blob/master/documentation/Preprocessing.md for a summary of the situation). To improve code portability, the Fortran standard should accept the existence of preprocessing and standardize the behaviors that are common and/or most useful.

@gklimowicz edit: The more recent link is Preprocessing.md.

@sblionel (Member) commented Nov 3, 2019

I'd guess you are unaware that Fortran 2003 had an optional Part 3 that defined conditional compilation. I'm not sure any vendor implemented it; there was little interest, and it was withdrawn in F2008. I really don't see us going back there.

The reality is that people use cpp (or a variant) and that seems to work for most everyone. The better question to ask is "what are the use cases for a preprocessor, and can better language design satisfy that need?" Look at the C interop stuff, for example - it eliminates a large swath of what preprocessors were used for. A proper generics feature would eliminate more.

@klausler (Author) commented Nov 4, 2019

I am aware of CoCo and how poorly it fared. The fact remains that C-like preprocessing is a real-world feature that is available in all compilers. Fortran would be more portable if the language acknowledged the existence of preprocessing and defined a standardized, portable subset of behavior.

@certik (Member) commented Nov 4, 2019

@klausler thanks for bringing this up. Related to this is the fact that the default behavior is to not use a preprocessor for .f90 files and to use it for .F90 files. And so in practice one must rename a .f90 file to .F90 in order for the preprocessor to be applied automatically, which is annoying.

Why not standardize a subset of the current behavior, and automatically apply it to .f90 files?

@sblionel's counterpoint is valid though --- just like C++ is moving away from using the preprocessor by adding language features, Fortran is moving in that direction too. So I think the counterpoint is to not standardize a preprocessor, but rather improve language features so that the preprocessor is not needed.

Besides templates, one common use case for a preprocessor that I have seen in many codes is a custom ASSERT macro, which is empty in Release mode and, in Debug mode, checks the condition and prints the filename and line number. I have created #70 for this.
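A minimal sketch of the kind of ASSERT macro described above (the macro name, the NDEBUG guard, and the assert_failed helper are illustrative, not taken from any particular project); it relies on the cpp-style __FILE__ and __LINE__ macros that most Fortran preprocessors provide:

#ifdef NDEBUG
#define ASSERT(cond)
#else
#define ASSERT(cond) if (.not. (cond)) call assert_failed(__FILE__, __LINE__)
#endif

program demo
  implicit none
  ASSERT(1 + 1 == 2)
contains
  subroutine assert_failed(file, line)
    character(*), intent(in) :: file
    integer, intent(in) :: line
    print *, "Assertion failed at ", file, ":", line
    error stop
  end subroutine assert_failed
end program demo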

@sblionel (Member) commented Nov 4, 2019

Keep in mind that the Fortran standard knows nothing about .f90 files or source files in general. It would be a broad expansion to try to legislate behavior based on file types. Note also that some operating systems are not case-sensitive for file names.

I don't think that the current state of preprocessing is broken enough to warrant the standard trying to get involved.

@klausler (Author) commented Nov 4, 2019

The standard doesn't have any concept of source files, much less source file names. The f18 compiler always recognizes and follows preprocessing directives and applies macro replacement, ignoring the source file name suffix, since its preprocessing capabilities are built into its source prescanning and normalization phase, and are essentially free when not used.

I absolutely agree that some common subset of preprocessing behavior should be standardized. This is the one major part of Fortran that every compiler provides that is not covered by the standard language document; but perhaps improving portability of Fortran programs across vendors, writing interoperable header files usable by Fortran and C++, or providing safe guarantees of portability into the future are no longer primary objectives.

As part of defining f18's preprocessing behavior, I performed a survey of various Fortran compilers and wrote a collection of tests to check how their preprocessors interacted with difficult Fortran features like line continuation, fixed form, &c. The current state of the art is far more fragmented than I expected to find (see my link above for details and a compiler comparison table), and none of the existing compilers seemed to stand out as a model to be followed.

EDIT: The standard does have a concept of source files in the context of INCLUDE lines, of course; my first sentence was too broad.

@certik (Member) commented Nov 5, 2019

@sblionel yes, the standard currently does not have a concept of source files, but perhaps it should. I see this GitHub repository as broader than what is strictly covered by the standard today --- because we might decide in the future to include some such things (such as more details about source files) in the standard.

And I must say I agree with @klausler on this. There is a lot that Fortran should standardize and improve. Perhaps it does not need to go into the standard itself, but then let there be a document that we all agree upon; we do not need to call it the "standard" (perhaps we can call it a "vendor recommendation"), but it would achieve what we want: improving portability across vendors.

@aradi (Contributor) commented Nov 12, 2019

I am not sure whether pre-processing must necessarily be implemented within the compiler, or standardized at all. Using an appropriate interpreted language (e.g. Python), it is possible to implement a pre-processor satisfying all the requirements @klausler formulated (and much more) within a single file. You add this one file to your project, and you can build your project with all Fortran compilers, as the pre-processor you ship makes sure that the compiler only sees standard-conforming source files. You will of course need the interpreter around whenever the project is built, but by choosing something as widespread as Python, that is the case on almost all systems.

Disclaimer: I may be biased, as I myself also wrote such a one-file pre-processor (Fypp), which apparently has found its way into several Fortran projects.

@gronki commented Nov 12, 2019

@aradi I think the fact that you wrote Fypp perfectly proves that there is a need for a preprocessor. Contrary to what @sblionel said, it seems that generic programming will not be around for years, whereas a good preprocessor (so not cpp...) can cover 95% of the use cases for generic programming.

@certik (Member) commented Nov 12, 2019

@aradi Thanks for the link, I wasn't aware of Fypp. Its syntax seems incompatible with the other preprocessors though. I can see that it has more features, so that's probably the reason. But having the preprocessor syntax standardized I think is valuable.

@aradi (Contributor) commented Nov 12, 2019

@certik The syntax is different from the usual cpp-derived pre-processors to make sure nobody tries to run those on files meant to be processed by Fypp. 😉 (It also allows better escaping and better prevents unwanted substitutions, which can be tricky with cpp-based approaches.)

@gronki Fypp was actually written in order to allow easy generation of templates, so yes, a pre-processor can help to work around (but not solve) many of the generic programming needs. Still, I am not sure whether it is a good idea to use "standardized pre-processor based workarounds" for generics, as we would then be stuck with them for the next few decades. 😄

@sblionel (Member)

@gronki, I never suggested a timeframe for generics, but there is a lot of resistance to adding features that paper over shorter-term problems. There is an existing preprocessor solution that works; why complicate matters by trying to wedge preprocessing into the standard? I'd prefer the energy and cycles to be put into solving the language issues that make people reach for a preprocessor.


@gronki commented Nov 12, 2019

@sblionel sure, you have a point. I agree that including it in the core standard could be a waste of resources. But what about a TS, just like it was with CoCo? Is there any idea or information on why CoCo "lost" to cpp (despite an implementation being available)? I didn't see anything wrong with it other than that it just didn't take off.

@certik (Member) commented Nov 12, 2019

In terms of priorities, I think I agree with @sblionel that it makes sense to invest our efforts in getting generics into the standard, rather than prioritize a short term solution over the long term. That's a good point.

@sblionel (Member)

@gronki, CoCo was before my time on the committee. All I know is that vendors didn't implement it and users didn't ask for it - they continued to use cpp.

CoCo was an optional part of the standard, not a TS. A TS has the expectation that it will be incorporated in a future standard largely unchanged. Our experience so far with optional standard parts is that neither users nor implementors are all that interested in them.

I will also trot out my oft-used point that everything that goes into the standard (and a TS or optional part is no different) has a cost, in terms of the resources and time of the committee members.

@gronki commented Nov 12, 2019

Thank you for the great explanation! I did not realize the difference between an optional part and a TS.

@klausler (Author)

@gronki, I never suggested a timeframe for generics, but there is a lot of resistance to adding features that paper over shorter-term problems. There is an existing preprocessor solution that works; why complicate matters by trying to wedge preprocessing into the standard? I'd prefer the energy and cycles to be put into solving the language issues that make people reach for a preprocessor.

It's not either/or, and people have work to do today. Fortran compilers have preprocessors, people use them, and code portability would benefit from standardizing them to the extent that they can be.

@Leonard-Reuter (Contributor)

It's not either/or, and people have work to do today. Fortran compilers have preprocessors, people use them, and code portability would benefit from standardizing them to the extent that they can be.

The preprocessors of Fortran compilers are in fact standardized by the cpp standard. I think this is more than sufficient (e.g. C18 (ISO/IEC 9899:2018), 6.10 Preprocessing directives).

@klausler (Author) commented Dec 16, 2019

It's not either/or, and people have work to do today. Fortran compilers have preprocessors, people use them, and code portability would benefit from standardizing them to the extent that they can be.

The preprocessors of Fortran compilers are in fact standardized by the cpp standard. I think this is more than sufficient (e.g. C18 (ISO/IEC 9899:2018), 6.10 Preprocessing directives).

Running Fortran through a "bare" cpp that doesn't know about Fortran commentary, line continuations, column-73 fixed-form line truncation, the CHARACTER concatenation operator, and built-in INCLUDE lines will produce poor results. Real production Fortran compilers either use a modified cpp or implement a cpp-like facility internally. The ways in which the Fortran features that I just mentioned interact with the preprocessors' implementations of their features (directive processing, macro replacement) show too much variation across Fortran compilers, and that is why they need standardization by a committee that should take code portability seriously.
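To make the first point concrete, here is a small illustration of my own (the program and macro names are invented) of two of the clashes a Fortran-unaware cpp can run into:

#define GREETING "Hello"
#define TWICE(x) ((x) + (x))

program clash
  implicit none
  ! A cpp running in C/C++ mode may treat '//' as the start of a line
  ! comment and silently drop the concatenation and everything after it.
  print *, GREETING // ", world"
  ! A free-form continuation inside a macro invocation leaves a stray '&'
  ! in the expanded argument unless the preprocessor understands Fortran.
  print *, TWICE(1 + &
                 2)
end program clash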

@septcolor

Just for clarity, is this proposal focusing on standardizing the behavior of existing preprocessors (cpp, fpp) rather than extending or adding new features for more robust code conversion or metaprogramming facilities (i.e., the latter should be posted elsewhere)?

@klausler (Author)

I do not exclude the addition of new features to a standardized preprocessor in my concept, but I don't know of any particular new feature that I would want that isn't already implemented in at least one compiler. What do you have in mind?

@aradi (Contributor) commented Dec 18, 2019

I think that, in case generics do not make it into the standard, loop constructs would be useful for generating the various specific cases of a given template.

We use that a lot for creating library functions with the same functionality but different data types/kinds (for example, wrapping MPI functions as in MPIFX). We currently use Fypp for that (shipping Fypp with each project), but would be more than happy to change to any other pre-processor language, provided we can be sure each compiler can deal with it.

@septcolor commented Dec 18, 2019

As for code conversion, my immediate use case is to iterate over multiple symbols to generate code from a templated one. For example, I often have a pattern like

open( newunit= data % file_foo, file="foo.dat" )
open( newunit= data % file_bar, file="bar.dat" )
open( newunit= data % file_baz, file="baz.dat" )
! similar lines follow
close( data % file_foo )
close( data % file_bar )
close( data % file_baz )
! similar lines follow

to open or close files for each type component. Although those lines are the same except for property names (like foo), I cannot write them conveniently at once (with standard Fortran/cpp/fpp). A similar situation occurs when doing some operation for all (or some of) type components, e.g. scaling by some factor

data % ene_foo = data % ene_foo * scale
data % ene_bar = data % ene_bar * scale
data % ene_baz = data % ene_baz * scale
! similar lines follow

If a preprocessor supports iteration over symbols, I may be able to write, e.g.

#for x in [ foo, bar, baz, ... ]
data % ene_$x = data % ene_$x * scale
#endfor

(where I suppose "$" is interpreted as in Bash). Fypp already has this facility

interface myfunc
#:for dtype in ['real', 'dreal', 'complex', 'dcomplex']
  module procedure myfunc_${dtype}$
#:endfor
end interface myfunc

and Julia also uses such loops over symbols sometimes, e.g.
https://github.com/JuliaLang/julia/blob/master/base/math.jl#L1100
https://github.com/JuliaLang/julia/blob/master/base/math.jl#L548
https://github.com/JuliaLang/julia/blob/master/base/math.jl#L382

I think it would be useful if standard Fortran or the preprocessor supported such a feature somehow, in case Fortran generics (discussed on GitHub) do not cover it.

Apart from the feature request, I am a bit concerned that the extensive use of "#" or "$" can make code very noisy or cryptic (the worst case of which might be Pe*l??), which I hope can be avoided (if possible...). In particular, if cpp/fpp requires a directive to start in column 1 with "#", the code may become less readable (as I often feel with codes that contain a lot of "#ifdef MPI").

@septcolor

Another feature request for a preprocessor (according to StackOverflow) might be to support macro expansions that produce multiple output lines (without using semicolons).

Fortran Preprocessor Macro with Newline
https://stackoverflow.com/questions/59309458/fortran-preprocessor-macro-with-newline

@klausler (Author)

Another feature request for a preprocessor (according to StackOverflow) might be to support macro expansions that produce multiple output lines (without using semicolons).

Fortran Preprocessor Macro with Newline
https://stackoverflow.com/questions/59309458/fortran-preprocessor-macro-with-newline

The reason he or she does not want to use semicolons is fear of a 132-character line limit, which is something that a compiler with a built-in preprocessing stage should enforce before macro expansion, not after.

@aradi (Contributor) commented Dec 19, 2019

Still, I can think of scenarios where passing multi-line arguments to macros would be useful. Thinking about macro-based unit test systems (as in Google Test or Catch for C++), you would need the ability to pass multi-line arguments to macros in Fortran.

@klausler (Author)

Multi-line arguments are a different problem from multi-statement expansions. Both should work; specifically, Fortran line continuations should be usable within macro invocations.

@klausler (Author)

This proposal seems unworkable to me. It means that a compiler that implements all but one minor feature of F2018 can't say it supports F2018. When you have a compiler such as gfortran that isn't even full F2003, yet has features from F2008 and F2018, what would you have it define?

As I wrote above. J3 did standardize a preprocessor and it was roundly ignored by the community. I have also observed that programmers are often misinformed about the revision of the standard they are using.

I would like to see preprocessors die and would not want to put anything in the standard that encourages their use. They're useful today because the language lacks features such as robust generics/templates, but work in that area is progressing, with some 202X features helping. Past use to deal with C interoperability is no longer necessary. If you feel you need to write different code for different compilers, it would be better to use the greatest common subset and enhance that as the compilers you use catch up. The alternative feels like a testing and maintenance nightmare to me.

A common strategy I have seen is to not use a feature that isn't supported in at least three compilers. I observe that this is likely to be less of a problem over the coming years as I see compilers catching up to the standard much more quickly than in the past 5-10 years.

I'm not proposing anything for the standard here. I understand that you don't want to standardize preprocessing, and that you get to decide whether preprocessing is standardized or not. Fine.

But preprocessing is still used in real codes by real users, it's part of every Fortran implementation, and I would like to provide the best implementation of preprocessing for them that I can in f18 in the absence of guidance from a standard.

@sblionel (Member)

I understand that you don't want to standardize preprocessing, and that you get to decide whether preprocessing is standardized or not.

No, I don't get to decide. I am just one vote among all WG5. But as I have said, we already did standardize preprocessing (though this happened before I was on the committee) and it was ignored and has now been dropped from the standard. Nothing I or WG5 say will stop people from using preprocessing using the tools (cpp) they are already using.

@klausler (Author)

I understand that you don't want to standardize preprocessing, and that you get to decide whether preprocessing is standardized or not.

No, I don't get to decide. I am just one vote among all WG5. But as I have said, we already did standardize preprocessing (though this happened before I was on the committee) and it was ignored and has now been dropped from the standard. Nothing I or WG5 say will stop people from using preprocessing using the tools (cpp) they are already using.

Does any production Fortran compiler actually use cpp? It doesn't interact well with line continuation, line truncation, Hollerith, or (especially) INCLUDE. I don't know of a compiler that preprocesses with a stock cpp.

CoCo wasn't rejected by users and implementors because people didn't need or want preprocessing. CoCo was a failure because it was gratuitously different from the C-like preprocessing and local tooling that people were already using, and it wasn't a better solution (in fact, it's really weird and ugly).

@sblionel (Member) commented Aug 1, 2020

I don't know what every compiler uses, but I often see cpp invoked with an option that better handles Fortran. ifort has its own fpp that accepts cpp directives. I think pretty much every Fortran compiler in common use has something similar.

My point was that people are already using cpp or a cpp-like preprocessor that they already have. I agree that CoCo was "gratuitously different", but what the users told us was that cpp (or something cpp-like) was working for them.

@klausler (Author) commented Aug 1, 2020

And if the common subset of the behaviors of Fortran-aware preprocessors (and built-in preprocessing phases) were to be documented, then both users and implementors would know what's portable and what's not. This is exactly the sort of thing that should be in a de jure standard. But that's not going to happen, and the best I could do for f18 was to determine that common subset myself, figure out the most reasonable behavior in edge cases where compilers differ, and ask users for guidance. If you have better advice for an implementor, I'm all ears.

EDIT: See here for a table of preprocessing behaviors of various compilers, using fixed and free form samples in this directory. As one can see, things are not terribly compatible today, but there is a common portable subset.

@certik (Member) commented Aug 1, 2020

@klausler I agree with you and I think the best we can do is to get a community / vendors consensus of what should be supported and document it. Most production Fortran codes that I have seen use macros in some form, and thus compilers must support them.

Thank you for taking the lead on that in the document you shared.

@certik (Member) commented Mar 5, 2021

We are currently figuring out how to add preprocessor support to LFortran. I can see that in Flang it is integrated into the compiler. I don't know if it is feasible to pre-process ahead of time (de-coupled from the compiler) and keep line numbers consistent; this is also relevant:

Last, if the preprocessor is not integrated into the Fortran compiler, new Fortran continuation line markers should be introduced into the final text.

That would be my preferred approach, but I assume the downsides are worse error messages and possibly slower compilation?
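(For context, a hypothetical sketch of what the quoted sentence means: if an external preprocessor expands a macro into a statement longer than the free-form line-length limit, it could re-wrap its own output by emitting Fortran continuation markers, for example

  if (.not. (first_long_condition(a, b, c) .and. &
             second_long_condition(d, e, f))) &
      error stop "Assert failed"

where the condition names are invented purely for illustration.)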

@klausler (Author) commented Mar 5, 2021

In f18 the first phase of compilation is called prescanning. It reads the original source file, and expands any INCLUDE or #include files, normalizes the source in many ways (preprocessing directives, macro expansion, line continuation, comment removal, space insertion for fixed form Hollerith, space removal / collapsing, case lowering, &c.) to construct a big contiguous string in memory. This string is what the parser parses, and it makes parsing so much easier. Each byte in that string can be mapped to its original source byte or macro expansion or whatever by means of an index data structure. In the parser and semantics we just use const char * pointers to represent source locations in messages and the name strings of symbols, and those get mapped back to source locations for contextual error message reporting later.

@aradi (Contributor) commented Mar 11, 2021

I think it would be much more important that all compilers accept and process #line directives in the source code. Then people can use whatever pre-processor suits their purpose best. As long as the preprocessor emits those directives (as, for example, Fypp does if requested), the user would always get correct error messages.
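For illustration, preprocessed output with such markers looks roughly like this (file name and line numbers invented; shown in the GNU cpp style, while some tools emit #line 11 "demo.F90" instead). The compiler uses the markers to map its diagnostics back to the original file:

# 1 "demo.F90"
program demo
  implicit none
# 11 "demo.F90"
  if (.not. (1 == 2)) error stop "Assert failed"
end program demo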

@certik (Member) commented Mar 11, 2021

I think the #line directives only ensure the correct line is being reported, but if your pre-processor expands a macro, it changes the line itself, so the compiler will report an error on the expanded line that is not what the user sees before calling the pre-processor.

@aradi (Contributor) commented Mar 11, 2021

No, if the source file is around, the compiler will show the right line. You can test it yourself.

test.F90:

#:def ASSERT(cond)
  #:if defined("DEBUG")
  $:cond
  #:endif
#:enddef

program test
  implicit none

  #! expression is incorrect to trigger compiler error
  @:ASSERT(1 ?= 2)

end program test

Executing

fypp -n -DDEBUG test.F90 > test.f90; gfortran test.f90

you obtain the error message:

test.F90:11:3:

   11 |   @:ASSERT(1 ?= 2)
      |   1
Error: Invalid character in name at (1)

@certik (Member) commented Mar 11, 2021

My bad, you are right. The only issue would arise if there is a syntax error in the expanded ASSERT macro, wouldn't it? Like this:

:ASSERT(1 ?= 2)@

I would expect it to show an incorrect column number.

@aradi (Contributor) commented Mar 12, 2021

If the error occurs in the expanded text, the error message can indeed be confusing. E.g.

#:def ASSERT(cond)
  if (.invalid. ${cond}$) error stop "Assert failed"
#:enddef

program test
  implicit none

  @:ASSERT(1 == 2)

end program test

with

fypp -n  test.F90 > test.f90; gfortran test.f90

results in

test.F90:9:6:

    9 |   @:ASSERT(1 == 2)
      |      1
Error: Unknown operator ‘invalid’ at (1)

In this case, one would have to drop the line marker generation as with

fypp test.F90 > test.f90; gfortran test.f90

to obtain

test.f90:5:6:

    5 |   if (.invalid. 1 == 2) error stop "Assert failed"
      |      1
Error: Unknown operator ‘invalid’ at (1)

But this is independent of whether the pre-processor is external or built into the compiler. Do you show the original line or the expanded line (or both) when the error occurs in expanded code? Whichever strategy one goes for, it can be realized equally well with built-in as with external pre-processors (provided they generate line marker directives and the compiler understands them).

@certik (Member) commented Mar 12, 2021

@aradi I am glad you posted here; I think you are right. Indeed, the compiler could know about the pre-processor as a black box, and it could show errors either in the expanded form or the unexpanded form, and either way it would show the correct line.

How would it know the line comes from a macro expansion? Well, I guess once it finds the line with the error in the expanded form, it can compare it with the unexpanded line (from the #line directive), and if they differ, it can show both; i.e., the error could look something like this:

test.f90:5:6:

    5 |   if (.invalid. 1 == 2) error stop "Assert failed"
      |      1
Error: Unknown operator ‘invalid’ at (1)

test.F90:9:6:

    9 |   @:ASSERT(1 == 2)
      |      2
Note: the line at (1) where the error happens came from a macro expansion at (2)

If the lines do not differ, then it can simply show the unexpanded form, as that is the one users see in their files.

I think this might be a very acceptable approach, with the advantage that we can use different pre-processors, such as fypp.

Summary of the black box approach:

  • Correct line and column numbers in the expanded form
  • Correct line, but potentially incorrect column number in the unexpanded form

I can still see some potential advantages of integrating the pre-processor more deeply with the compiler:

  • Potentially faster (no need to write a new source file out and to parse #line directives)
  • It knows which macro got expanded to what, so clang for example gives you error messages almost as if macros were part of the language itself
  • Part of the previous point is that it will give you correct column numbers in the unexpanded form

But the black box approach is not bad, and one can implement both.

@aradi (Contributor) commented Mar 12, 2021

@certik I fully agree. Yes, the column number will be incorrect in the unexpanded form. And yes, a tight integration can give even deeper insights. But that assumes the existence of a well-defined (standardized) pre-processor language that all Fortran compilers implement and follow, and that covers all the pre-processing needs people may come up with. In the meantime, the line directives can serve as a "bridging technology", allowing the use of custom pre-processors.

@certik (Member) commented Mar 12, 2021

I created an issue at https://gitlab.com/lfortran/lfortran/-/issues/281 to implement this in LFortran.

@klausler (Author)

 f18 -fsyntax-only ppdemo.f90
./ppdemo.f90:2:10: error: Actual argument for 'x=' has bad type 'CHARACTER(1)'
  print *, CALL(sin, 'abc')
           ^^^^^^^^^^^^^^^^
./header.h:1:1: in a macro defined here
  #define CALL(f,x) f(x)
  ^^
./ppdemo.f90:1:1: included here
  include "header.h"
  ^^^^^^^^^^^^^^^^^^
that expanded to:
  sin( 'abc')
  ^
f18: Semantic errors in ppdemo.f90

I think that it's necessary to have an integrated preprocessing facility in the same part of the compiler that's handling INCLUDE statements, line continuation, case normalization, &c. It's not hard to implement and it should be standardized.

@jeffhammond

Does any production Fortran compiler actually use cpp? It doesn't interact well with line continuation, line truncation, Hollerith, or (especially) INCLUDE. I don't know of a compiler that preprocesses with a stock cpp.

gfortran does, and I know this because I have been frustrated in the past by this issue: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=28662. Not only does gfortran invoke cpp, but it does so while disabling newer features that I'm using all the time in C99/C++11 code.
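A small illustration of my own (not taken from the bug report) of the kind of newer preprocessor feature this refers to: C99 variadic macros are typically rejected when cpp runs in the traditional mode that Fortran front ends tend to use.

#define DEBUG_PRINT(...) print *, __VA_ARGS__

program demo
  implicit none
  ! Expands to: print *, 'x =', 1.0 under a C99-mode preprocessor;
  ! a traditional-mode cpp does not recognize '...' / __VA_ARGS__.
  DEBUG_PRINT('x =', 1.0)
end program demo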

@urbanjost commented Jul 2, 2022

FYI:

Doesn't solve the general question of standardizing a preprocessor, but might be useful considering it is not standardized ...

A trick some may or may not know: gfortran can read code from stdin. That can be handy for issues like this.

It might still be a problem if there are a lot of build files that just have a "gfortran" command in them, or other issues. You can of course call cpp(1) by itself first and make a processed .f90 file; but a less-known feature of gfortran is that it can read from stdin, so if you can replace your "gfortran" invocation with cpp $FILE | gfortran -x f95 -, you can use any preprocessor options, or any preprocessor, you want.

#!/bin/bash
cat >x3.F90 <<\EOF
#define msg(x) print *, #x
program testit
implicit none
real :: w=1234.5678
   write(*,*)w
   write(*,*)__LINE__
   write(*,*)__DATE__
   write(*,*)__TIME__
   msg('Hello')
end program testit
EOF
rm -f ./a.out
cpp  x3.F90|gfortran -x f95 -
#cpp  x3.F|gfortran -x f77 -
./a.out
exit

I have a little script that I call "gfortran+" that takes advantage of this: it does several preconditioning steps but otherwise lets you call it like gfortran. That is particularly handy with fpm, because I just set the environment variable FPM_FC to "gfortran+" and then can just do "fpm build" and so on.

Personally, I gave up and have my own preprocessor :>

@jeffhammond commented Jul 4, 2022

Yeah, there are lots of workarounds. NWChem has done a two-step preprocess since forever because of bad Fortran compilers that can't preprocess correctly. The point here is that there is no excuse for users to have to do this. Every decent Fortran compiler can implement the thing users want.

@aradi (Contributor) commented Jul 4, 2022

@urbanjost Neat, indeed. But tricks that only work with gfortran are not really useful for most projects, as people may want to compile them with other compilers too.

@jeffhammond Yes, we did exactly the same in DFTB+ (post-processing a cpp-preprocessed source to make sure it is standard conforming) for quite a while. Then we decided to write our own preprocessor (Fypp), which has well-defined behavior on all platforms and emits standard-conforming source files. I think this is still the best option as long as there is no standardized pre-processor with well-defined, platform- and compiler-independent behavior that is implemented in all popular compilers. (So, at least for the next 10-20 years?)

@urbanjost

The problem has not been a lack of pre-processors; from m4 to Fypp to prep to fpp to cpp and more, the problem has been standardization and universal availability. A bash shell makes an excellent preprocessor, for example, with the code in here documents; it ships with most platforms and takes a minute to install on others. Variable expansion, looping, conditionals, calling any system utility... and many people are already familiar with the syntax required. It really does make a superb preprocessor.

It has been the lack of anything in the standard, plus the fact that for most users the "ttuw" program (the thing users want) seems to be something very close to fpp, which is obviously close to cpp. With ISO_C_BINDING, interest in preprocessing had nearly vanished; now, with an uptick in interest in templating, it has gained more interest again. Except when the processor is itself written in Fortran (or C), the processors have depended on specific environments that have waxed and waned. Languages like Java, Ruby, Python, Perl ... have been assumed to be available (definitely not always the case; I have worked on multiple clusters where none were available), so nothing is likely to resolve the issue except an fpp(1) program defined as part of the language, and even then it will lack some capability someone desires and the cycle will continue. So all I depend on the compiler to do is compile standard Fortran, and I try to pick preprocessing tools that are readily available on any platform I am likely to need. As there are fewer and fewer distinct environments, and it becomes easier to just have a little portable environment you can use like an app (i.e. VMs, containers, ...), the problem is basically turned on its head, but it will continue unless/until the language defines it. And remember that most languages that require preprocessing are always trying to get rid of it. Look no further than C to see where freely supporting pre-processing leads.

@klausler (Author)

Preprocessor standardization was the most common item on Fortran 202Y wish lists (except maybe for templates, which are already on the docket).

https://j3-fortran.org/doc/year/22/22-176r1.pdf

@gklimowicz (Member)

Yes. There has been a bit of lobbying for this to promote code portability. Current discussion in JoR is that it might be treated as a new "Part 2" to the standard, but a mandatory, not optional, "companion processor". There may be a separate subgroup established to focus on this and make substantive progress between meetings. Stay tuned.

@marshallward (Contributor)

I have encountered two specific preprocessing issues where standardization might help. Both are connected to C interop, where the symbols are ambiguous (for example, small differences in glibc on Linux and BSD's libc) and must be determined at compile time.

We use autoconf and CPP macros to assign the internal name of these symbols, but bind(c, name=...) needs these to be defined as strings. We would like to use stringizing to manage the quote delimiters and related issues, but it is unavailable under traditional CPP mode.

Currently we assign the macros as actual strings, e.g. -DFUNC_NAME=\"local_name\" rather than just -DFUNC_NAME=local_name, which is tolerable but still a nuisance and difficult to communicate to some users. And if the strings get any more complicated, we might find the approach untenable.
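A hedged sketch of the pattern described above (FUNC_NAME comes from the comment; the C function, interface, and module names are invented). Today the quotes have to come from the build line, e.g. gfortran -cpp -DFUNC_NAME=\"getenv\" c_env.F90:

module c_env
  use iso_c_binding, only: c_char, c_ptr
  implicit none
  interface
    ! FUNC_NAME must already expand to a quoted string, e.g. "getenv"
    function c_getenv(name) bind(c, name=FUNC_NAME) result(val)
      import :: c_char, c_ptr
      character(kind=c_char), intent(in) :: name(*)
      type(c_ptr) :: val
    end function c_getenv
  end interface
end module c_env

With working stringizing, one could instead pass -DFUNC_NAME=getenv and write something like #define STR_(x) #x, #define STR(x) STR_(x), and bind(c, name=STR(FUNC_NAME)), avoiding the escaped quotes.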

A second case is that local cpp programs typically assign platform-specific flags, like __linux__ or __amd64__, whereas the cpp invoked by the Fortran compiler rarely sets these. I doubt that standardization can help much here, and things like // comments make it perhaps impossible to fully sync cpp with Fortran, but it would be very helpful to bring these tools as close together as possible.

Although we don't like to rely on these sort of platform-specific flags, they help to provide useful defaults when autoconf is unavailable or when using a legacy build system.

I already saw stringizing in @klausler's document, so no new information here, but I thought that a specific example might help support the effort.

@w6ws commented Jul 26, 2022

The link in the original post to the preprocessing document is broken. The current link appears to be:

https://github.com/llvm/llvm-project/blob/main/flang/docs/Preprocessing.md
