Let VUnit execute simulators which are inside a container or VM, from outside #324
@kraigher, I've checked bsub and, although similar, it is overkill. Because it is meant for HPC with multiple physical devices, an additional orchestration service (LSF) must be run. My proposal is more straightforward because all the containers are expected to be executed on the same machine, so no additional service is required. Well, Docker is the service, somehow.
I want to achieve 2, 3 or 4 with VUnit. I will focus on 4, as I think it is the least invasive approach from VUnit's perspective:
That's quite easy to achieve, as long as all the sources and products are kept in the single folder that is shared between VUnit and the container with GHDL. However, VUnit requires additional external sources to be built. For example, for example/vhdl/array_axis_vcs, more than 100 files from the VUnit installation are needed.
Unless you tell me not to do so, I'd go with the first option, because copying the same sources to every project makes little sense to me. As a result, the command 'decoration' is required to not only prepend and append some args, but also to rewrite the two paths that might appear in the arguments.

Interception/decoration script

Should all of this be done without any modification to VUnit, this can be a local prototype of the interception script:

```sh
$ mv /usr/local/bin/ghdl /usr/local/bin/ghdlback
$ vim /usr/local/bin/ghdl
#!/bin/sh
/usr/local/bin/ghdlback $@
:wq
```

In order to see the exact commands that are executed:

```sh
#!/bin/sh
# Print the arguments (except for the --version probe), then forward them.
# Note that the unquoted $@ re-splits any argument containing spaces; this
# is the quoting issue discussed further below.
if [ "$1" != "--version" ]; then
  echo "$@"
fi
/usr/local/bin/ghdlback $@
```

and we get:

```
-a --workdir=/work/vunit_out/ghdl/libraries/vunit_lib --work=vunit_lib --std=08 -P/work/vunit_out/ghdl/libraries/vunit_lib -P/work/vunit_out/ghdl/libraries/osvvm -P/work/vunit_out/ghdl/libraries/lib /usr/local/lib/python3.6/site-packages/vunit_hdl-3.0.3rc0-py3.6.egg/vunit/vhdl/check/src/check.vhd
Compiling into vunit_lib: ../usr/local/lib/python3.6/site-packages/vunit_hdl-3.0.3rc0-py3.6.egg/vunit/vhdl/array/src/array_pkg.vhd
passed
-a --workdir=/work/vunit_out/ghdl/libraries/vunit_lib --work=vunit_lib --std=08 -P/work/vunit_out/ghdl/libraries/vunit_lib -P/work/vunit_out/ghdl/libraries/osvvm -P/work/vunit_out/ghdl/libraries/lib /usr/local/lib/python3.6/site-packages/vunit_hdl-3.0.3rc0-py3.6.egg/vunit/vhdl/array/src/array_pkg.vhd
Compiling into lib: src/test/tb_axis_loop.vhd
passed
--elab-run --std=08 --work=lib --workdir=/work/vunit_out/ghdl/libraries/lib -P/work/vunit_out/ghdl/libraries/vunit_lib -P/work/vunit_out/ghdl/libraries/osvvm -P/work/vunit_out/ghdl/libraries/lib tb_axis_loop tb -gtb_path=/work/src/test/ -grunner_cfg=active python runner : true,enabled_test_cases : test,output path : /work/vunit_out/test_output/lib.tb_axis_loop.test_260bb5c8c675e898eca5dc9024a4420ede12c0bc/,tb path : /work/src/test/,use_color : true --assert-level=error
```

that we can rewrite as:

```
-a --workdir=${WORKDIR}/vunit_out/ghdl/libraries/vunit_lib --work=vunit_lib --std=08 -P${WORKDIR}/vunit_out/ghdl/libraries/vunit_lib -P${WORKDIR}/vunit_out/ghdl/libraries/osvvm -P${WORKDIR}/vunit_out/ghdl/libraries/lib ${VUDIR}/vhdl/check/src/check.vhd
Compiling into vunit_lib: ..${VUDIR}/vhdl/array/src/array_pkg.vhd
passed
-a --workdir=${WORKDIR}/vunit_out/ghdl/libraries/vunit_lib --work=vunit_lib --std=08 -P${WORKDIR}/vunit_out/ghdl/libraries/vunit_lib -P${WORKDIR}/vunit_out/ghdl/libraries/osvvm -P${WORKDIR}/vunit_out/ghdl/libraries/lib ${VUDIR}/vhdl/array/src/array_pkg.vhd
Compiling into lib: src/test/tb_axis_loop.vhd
passed
--elab-run --std=08 --work=lib --workdir=${WORKDIR}/vunit_out/ghdl/libraries/lib -P${WORKDIR}/vunit_out/ghdl/libraries/vunit_lib -P${WORKDIR}/vunit_out/ghdl/libraries/osvvm -P${WORKDIR}/vunit_out/ghdl/libraries/lib tb_axis_loop tb -gtb_path=${WORKDIR}/src/test/ -grunner_cfg=active python runner : true,enabled_test_cases : test,output path : ${WORKDIR}/vunit_out/test_output/lib.tb_axis_loop.test_260bb5c8c675e898eca5dc9024a4420ede12c0bc/,tb path : ${WORKDIR}/src/test/,use_color : true --assert-level=error
```

where WORKDIR is the project folder (here, /work) and VUDIR is the VUnit installation path (here, /usr/local/lib/python3.6/site-packages/vunit_hdl-3.0.3rc0-py3.6.egg/vunit).

So we can either hardcode those paths or use regexps to replace them. If that was supported, examples 2, 3 and 4 could be easily achieved by replacing the paths. I might also have overlooked the docs; if this is already supported, please let me know. Examples: from local to container paths.
I forgot to mention that Docker provides a Python SDK: https://docs.docker.com/develop/sdk/. It lets you do anything the docker command does, but from within Python apps. So this feature could be embedded into VUnit if desired, instead of relying on external scripts.
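For instance, a minimal sketch with the SDK; the image name, volume mapping and command are illustrative assumptions, not taken from this thread:

```python
# A hedged sketch using the Docker SDK for Python ('pip install docker').
import docker

client = docker.from_env()
output = client.containers.run(
    "ghdl/ghdl:latest",                    # simulator image (assumption)
    ["ghdl", "--version"],                 # command to run inside it
    volumes={"/home/user/project": {"bind": "/work", "mode": "rw"}},
    working_dir="/work",
    remove=True,                           # clean up the container afterwards
)
print(output.decode())
```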
Spontaneously it seems like a lot of complexity just to avoid installing VUnit in the container. I infer that your problem is that you want to bump the VUnit version often, and that creates the need for many containers. Wouldn't the easiest solution be to run VUnit from within the container but keep the VUnit repo outside of it? As long as VUnit's only dependency (colorama) is installed in the container, you can just set the PYTHONPATH to the VUnit repo and map the repo root into the container to run it without any "install". When I run VUnit, I never install it with setup.py or pip; I just set the PYTHONPATH in the shell.
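A minimal sketch of that approach; all paths and the image name are illustrative assumptions:

```sh
# Map the VUnit repo and the project into the container, then point
# PYTHONPATH at the repo so no installation is needed inside the image.
docker run --rm \
  -v /home/user/vunit:/work/vunit \
  -v /home/user/project:/work/project \
  -e PYTHONPATH=/work/vunit \
  -w /work/project \
  my/simulator-image python3 run.py
```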
The main point is not to avoid installing VUnit (which is only 6 MB, including the docs), but to avoid installing Python: installing Python in each container requires four times as much space for the base image. In practice, this is the difference:
Moreover, if external, VUnit itself could be executed in a container. A different point, which I don't know how relevant it might be, is that
Some random thoughts:
This can be easily met.
What about path translation/modification? Can VUnit be instructed to introduce two keywords (prefixes) to make the regexp replacement easier?
This can be done by sharing the root of the hard drive with the container. But that's not advisable at all, and it does not make much sense considering that the user is using a container or virtual machine. I mean, if the user cannot keep all the files inside a project folder (or copy the external ones before calling VUnit), then I think that using a container or virtual machine is not an option.
There are two requirements:
Then, this wrapper script automatically sets the required environment variables (tested on GNU/Linux and Windows). Indeed, it uses xdpyinfo to test whether any X server is available; if there is not, it checks whether it is a MINGW environment (Windows) and prompts the user for confirmation to initialize Xming. See 1138-4EB/hwd-ide/wiki/Desktop#windows. I use this script very frequently (every day) with QuestaSim docker images and with BTW, this point has nothing to do with this issue. I mean, this is an option right now, and it would also be if any change was made.
I hope to be able to build GHDL for ARM at some point, so when thinking about low-cost target devices, size matters. However, I agree with you: this is a very specific use case within the VUnit user base.
I cannot help here; I do not know. I am not a Python user, so I just touch the minimum. I do not really know what the differences between versions are.
Path translation is complex. There are a lot of paths everywhere: paths in .tcl files, paths given on argv to the simulation, paths given as generics to the test bench from the user. However, the recommendation is for the user to create paths in the run.py relative to the project folder or run.py, and that the project folder is referenced relative to the run.py file. Thus, running in docker would not need any complex path translation, since all paths were created relative to the run.py file. The way I see it, you can already achieve what you want by doing two things.
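As an illustration of the relative-path recommendation above, a minimal run.py sketch; the library and file names are assumptions:

```python
# Every path is derived from the location of this file, so the same relative
# layout works on the host and inside a container.
from os.path import join, dirname
from vunit import VUnit

ROOT = dirname(__file__)

VU = VUnit.from_argv()
LIB = VU.add_library("lib")
LIB.add_source_files(join(ROOT, "src", "*.vhd"))
VU.main()
```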
That's what I did above; see section 'Interception/decoration script'. After your explanation, I suppose that WORKDIR is always going to be the path where run.py is called from, so I can just set an environment variable from run.py.
Thanks! I'll give it a try tonight!
But if you do not "install" VUnit using pip or setup.py, and just keep it in a folder next to the project, you can always map that into docker and set the PYTHONPATH.
Well, everything we commented on works as expected. However, it seems that Python is a runtime dependency for GHDL itself when VUnit is used to run the design (lines 292 and 297 of the log). I suppose it is related to src/vhdl/python. Does VUnit require simulators to provide a Python interface, or is it used only when available, for some "advanced" features?
VUnit does not use any Python feature of GHDL; we only call the binary. I have seen GHDL recently add a Python API, but VUnit has no plans to use it.
Then, I'll have to investigate. I think that the script might be removing some symbols. This is the command producing the error:

```sh
ghdl \
  --elab-run \
  --std=08 \
  --work=lib \
  --workdir=/work/examples/vhdl/array/vunit_out/ghdl/libraries/lib \
  -P/work/examples/vhdl/array/vunit_out/ghdl/libraries/vunit_lib \
  -P/work/examples/vhdl/array/vunit_out/ghdl/libraries/osvvm \
  -P/work/examples/vhdl/array/vunit_out/ghdl/libraries/lib \
  tb_sobel_x tb \
  -gtb_path=/work/examples/vhdl/array/src/test/ \
  -grunner_cfg=active python runner : true,enabled_test_cases : test_input_file_against_output_file,output path : /work/examples/vhdl/array/vunit_out/test_output/lib.tb_sobel_x.test_input_file_against_output_file_b1de5eca46714be73ca0390ba61a470c08d9bdc0/,tb path : /work/examples/vhdl/array/src/test/,use_color : true --assert-level=error
```

There should be quotes or double quotes in the last line, shouldn't there?
Yes, it seems like a quoting issue. VUnit does not rely on any shell and sets the argv of the subprocess directly.
Indeed, I had to put the quotes in the ghdl interface of VUnit because, as soon as those are received by a bash script, the generics are split. See #327. I hope that this does not cause any problems in other use cases.
Well, there is no way for a shell to receive unescaped arguments and forward them unchanged; receiving them inherently splits them. https://blogs.msdn.microsoft.com/twistylittlepassagesallalike/2011/04/23/everyone-quotes-command-line-arguments-the-wrong-way/ In Python there are built-in solutions to sanitize the output argv when the target is known to be a shell:
However, this implies that a minimal change needs to be made to VUnit, in order to let it know when the output should be escaped.
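One such built-in is shlex.quote; a minimal sketch of escaping argv before it reaches a shell wrapper (the generic value mimics the logs above, the rest is illustrative):

```python
# Escape each argument so a POSIX shell reconstructs the original argv;
# without this, the space-containing runner_cfg generic is split into tokens.
import shlex
import subprocess

argv = [
    "ghdl", "--elab-run", "--std=08", "--work=lib",
    "-grunner_cfg=active python runner : true,use_color : true",
]
cmd = " ".join(shlex.quote(arg) for arg in argv)
subprocess.call(cmd, shell=True)  # the shell now sees the generic as one token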
There is no "splitting" of arguments going on between VUnit and your wrapper script. It is only when issuing a command in a shell that a tokenization of the input string into a list of strings (argv) occurs.
You are right. If we just had to forward clean arguments, it might work. But we cannot test that, because paths need to be rewritten; indeed, the objective of putting an intermediate wrapper is to modify something. As soon as the arguments reach a shell script, they are split. Then, as you say, a different language must be used. Writing an additional program in C or Golang is of little help, because it would have to be compiled for each platform. So, since VUnit is written in Python, the most sensible approach is to do it in VUnit:
At this point, it makes sense to do string replacements in the same place:
Moreover, because VUNIT_DIR and PROJECT_DIR are already available in VUnit, I'm wondering how to reuse them instead of reading the envvars. Indeed, if the replacement is done inside VUnit (which was my very first proposal in this thread), there is no need for escaping: "$@" will work.
Firstly, there is no such thing as a PROJECT_DIR in VUnit; that is purely a user-side invention, if it exists. Secondly, I do not like replacing strings with unknown meaning just because they look like some path that in 99% of the cases should be replaced, but not in the 1%. Thus, this is not something I am keen on integrating into VUnit. If it should be integrated into VUnit, it needs to be a 100% semantically correct operation. I am not against adding something to VUnit to support your use case, but it needs to have a good separation of concerns and not be a hack. Anyway, the best language to write your wrapper script would be Python; you need Python anyway to run VUnit on the host.

```python
# Sketch of how it could work
import sys
import subprocess

def fix_and_replace_stuff(arg):
    # Placeholder: rewrite host paths into container paths here.
    return arg

# sys.argv[0] is the wrapper itself, so only forward the remaining arguments.
subprocess.call(["ghdl"] + [fix_and_replace_stuff(arg) for arg in sys.argv[1:]])
```

Maybe you could also use the docker API via Python to launch GHDL. In that way there is no shell involved at all and everything is precise, except the path replacement, which is not 100% semantically OK. Regarding path substitution, I think you can and should avoid it completely. To avoid it you just need to ensure that the vunit folder and project folder are identical in the docker image and the host. For the project folder that should be doable. For the vunit folder you can just place vunit next to the project folder and set the PYTHONPATH there; then it should also look the same to the host and the image, if the project folder does. For example, if you have a common parent folder holding both, the same absolute paths can be used on the host and in the image, as sketched below.
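A hedged sketch of such a layout; the paths and image name are assumptions:

```sh
# Bind-mount the common parent at the same absolute path inside the
# container, so every path VUnit generates on the host is also valid
# for the simulator:
#   /work/vunit    -> VUnit repo, on PYTHONPATH
#   /work/project  -> contains run.py
docker run --rm \
  -v /work:/work \
  -e PYTHONPATH=/work/vunit \
  -w /work/project \
  ghdl/ghdl:latest sh -c "python3 run.py"
```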
Indeed, in the examples above it is the path where run.py is called from.
On the one hand, I am not suggesting to include any of the latest changes. I considered this issue to be closable with "Spontaneously it seems like a lot of complexity just to avoid installing VUnit in the container". The conversation afterwards was just to let myself understand how large that complexity is. Moreover, it let both of us realize that a couple of shell scripts do not suffice. On the other hand, I am not confident enough to push any of the Python changes, now that I understand the implications. For now, I just want to keep a branch in my fork. Considering a possible future integration, these are some options I need to investigate:
Apart from where/how to put the code, these are the functionalities I want to try in Python:
From my previous experience with the Golang SDK, it is just a matter of syntax; there is very little difference between using one or the other. Also, the main reason not to include the SDK is to avoid making VUnit depend on it. I suppose that it is possible to tell VUnit to lazy-load some libraries, but I don't know how to do it (yet). As a general question, what do you think about adding the docker SDK as a dependency? I mean something as general as a
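For reference, a common lazy-loading pattern that would keep the SDK optional; the function name is a made-up illustration:

```python
# Import the Docker SDK only when the container feature is actually used,
# so plain VUnit runs do not require the package to be installed.
def get_docker_client():
    try:
        import docker  # provided by the 'docker' package (the Docker SDK)
    except ImportError as exc:
        raise RuntimeError(
            "running simulators in containers requires 'pip install docker'"
        ) from exc
    return docker.from_env()
```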
This works. For example: https://travis-ci.org/1138-4EB/vunit/builds/368708071. There, both folders are placed side by side. However, I am on a Windows 10 host, and I am using MINGW to interact with docker. Docker for Windows 10 is a very small Virtual Machine running on Hyper-V. There is so much going on with the paths that it is hard to ensure that the same ones can be used. Besides, I need to support at least two paths (VUnit can be located in any of them):
Therefore, I need the approach to be flexible enough. If that is not possible, I can always wrap run.py with a script that copies all the external sources to a temporary dir next to it. Indeed, this is what I do now. But I'd rather explore alternatives.
The way I see it, the benefit has to outweigh the cost and complexity, so let's analyse them. Benefits:
Costs:
In this cost/benefit analysis I think the costs outweigh the benefits for VUnit. The way I see it, a 100 MB larger docker image (25% saving in the best-case scenario) is not worth the increased complexity in the VUnit code base and presenting a leaky abstraction to the user. To me, just installing Python in the image and getting everything working without issues looks more attractive. At work we have docker images for simulation using a commercial simulator, and the size of Python in those is not significant. The docker image for the simulator is built on top of the docker image with Python, which is already needed for the slave machines to have a controlled environment. In that way the Python layer is already cached on the slaves, and the simulator layer is the only thing that needs to be downloaded again (often that is cached as well).
I updated dockerexec with a Python based solution that requires a minimal modification to the VUnit codebase. See the README and travis-ci.org/1138-4EB/vunit/jobs/369877366. The required modification is in vunit/simulator_interface.py: now, simulator executables are only searched for in the system environment variable PATH, not in the PYTHONPATH. Apart from that, an issue arose when trying to use the snippet to get the installation path of VUnit: it seems that the existence of valid simulators is checked when VUnit is imported, not when an instance of the class is created nor when main is executed. Therefore, a circular dependency appears: I cannot add the path to the fake executables located relative to the VUnit installation, because there is no valid simulator in the path; if I could read the installation path without checking for valid simulators, then I would add the path and VUnit would find a valid one. @kraigher, shall I open a separate issue to suggest making the simulator check later/optional?
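For context, the modification described above amounts to something like this hedged, illustrative lookup (not the actual VUnit code):

```python
# Search only the directories listed in the PATH environment variable for a
# simulator executable; PYTHONPATH is deliberately not consulted.
import os

def find_prefix(executable_name):
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(directory, executable_name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return directory
    return None
```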
Benefits:
Costs:
The current approach is to start two containers, one with VUnit and another one with the simulator. Both containers have the same volumes/directories bound to the same paths in the host. Therefore, no path handling/mapping is required inside VUnit, because all the paths available to VUnit are also available to the simulator, as sketched below.
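A hedged sketch of the glue (container and executable names are illustrative): a fake ghdl on the VUnit container's PATH that forwards every call to the simulator container.

```sh
#!/bin/sh
# Fake 'ghdl' found by VUnit's simulator scan: forward the exact argv to the
# real binary inside the simulator container; both containers share the same
# volumes, so every path in "$@" is valid on the other side too.
exec docker exec simulator_container ghdl "$@"
```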
As explained above, now the user is explicitly aware of which paths are used inside the containers, and sources must be written with these in mind. Therefore, not all existing tests can be directly converted.
I think that the current approach levels the balance. There is no increased complexity in the VUnit code; indeed, there is no need to merge anything but the modification at the beginning of this comment. Then, should anyone be interested in splitting VUnit from the simulator, this issue and the referred branch suffice.
AFAIK, there is no straightforward solution to add a docker layer on top of two or more different images, even if those images share a common parent. I.e., there is no equivalent to git merge/rebase in docker. Therefore, installing the same Python and VUnit version in four different simulator containers requires repeating the same process four times. Keeping those images in sync between different machines when there is no private registry available is painful. Allowing a single VUnit docker image to talk to any container produces fewer update issues.
In my case, the simulators are available in images which do not have Python installed. I asked about this recently and I was referred to moby/buildkit, but it seems to be in early development (yet).
Better to reopen this issue, which has all the context.
It seems very weird to try to find simulators in the PYTHONPATH; I would never expect to find simulators there. It is quite easy to set the PATH environment variable within Python.
Also, to be explicit about where the simulator is located, you can use Anyway, regarding the simulator detection on import, I can move that to the instantiation of the VUnit class.
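For instance, a minimal sketch of setting PATH from the run script before VUnit scans for simulators; the docker_bin directory is a made-up name for the folder holding the fake executables:

```python
# Prepend the folder with the (fake) simulator executables to PATH before
# importing VUnit, so its import-time simulator detection finds them.
import os
from os.path import abspath

os.environ["PATH"] = abspath("docker_bin") + os.pathsep + os.environ["PATH"]

from vunit import VUnit  # detection runs at import time, as noted above

VU = VUnit.from_argv()
VU.main()
```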
I read that the PYTHONPATH should be modified with... So the branch is now edited to use...
That was the very first approach, but I'd like VUnit_docker to 'automatically' detect the available fake executable(s) for the user, because I have not added envvar passing to the containers yet.
That'd be great, as it would allow replacing VUnit_docker.py#L20-L37 with:
After the recent commits to the master branch, the snippet above is now implemented in dockerexec. Therefore, the prototype is functional and quite clean for my requirements. I might update the branch in the future, to add extra features that might be needed in more complex use cases (e.g., swarms). But, for now, I need to move on. All my questions about VUnit are now answered, and the suggested and accepted changes are already merged. So, I think we can close this issue. Anyway, I'll leave it open, just in case you want to keep it as a "live" reference of what the title describes. @kraigher, thanks so much for your patience, time and effort. Your help, and having this issue solved, let me learn quite a lot about Python and the internals of VUnit.
Because VUnit requires the simulator to be available in the path, and executable names (vsim, vcom, etc.) are 'hardcoded', VUnit must be installed in the same docker image as the simulator. In one use case, I have two images, one with GHDL and a different one with QuestaSim, and VUnit is installed in both of them. Indeed, because I have versions with and without VUnit, there are four images. Each time VUnit is updated, two of them have to be rebuilt.
My proposal is to make VUnit execute simulators which are inside a container, from outside:
In the latter, I would have three images instead of four, and I would have to rebuild a single one. Note how this would scale with more simulators. Furthermore, if the VUnit docker image was generated in the vunit travis-ci flow, I would consume it as an alternative to ghdl/ext:vunit. A workaround would be to 'hack' it at system level, that is, to write some man-in-the-middle shell scripts that capture VUnit's calls and forward them to containers. However, it is not clean, because it is not straightforward to handle the lifecycle of tests as VUnit does.