Maven release #96
We should be releasing a new version by next week. The problem isn't versioning, the problem is the lack of continuous integration. Would you like to contribute in this way? @vb216 has started to work on that already, but only for Linux...
Sure, can you point me to what's already been done? Also, I'm a bit lost... AFAIK JavaCPP doesn't have a native part itself. Or do you mean for tests? Also, if nothing has been done, do you have any preference as to where to run the builds? I know CircleCI has OS X support, although it's not free I think. Not sure about Windows, though...
The JavaCPP Presets do have native libraries. The whole point of JavaCPP is the presets. If we didn't have those, you might as well use SWIG. @vb216 is currently setting up Jenkins, but it's proving to be a bit more challenging than first anticipated. If you could try an alternative and report on how easy or not it was to set up, that would be great. It doesn't need to be free. I can get a little bit of a budget for that.
I can't say I agree with that part :D. My current usage is for a lib I need to use from the JVM, and with which SWIG I'm sure will have trouble (because it needs to parse the headers, etc.), so for me JavaCPP is working very well for this purpose. But then, just to be clear, you need CI for JavaCPP (easy) AND JavaCPP-Presets (not that easy), right? If you have a few servers (and I'm guessing you do, if you're setting up Jenkins), I prefer and am well versed in Bamboo (disclaimer: I work for Atlassian, but I still think Bamboo is way superior; for an OSS project it's free as well, but you still need a place to put it). I think that for what you need, some cloud solution might prove enough, except for the part where you want to set up Windows builds, of course... Is JavaCPP cross-compilation friendly (Clang or GCC), or does it really need Visual C++ for Windows? Do you have a preference of Jenkins/Bamboo/random cloud CI?
I've done a bit of quick research, and it seems there is a way of running it for Windows as well in the cloud. So basically, with CircleCI/Travis for Linux and OS X, and AppVeyor for Windows, we can cover all platforms (and it's still free, although I know that you said budget can be obtained). I'll start a fork and will add support for this stack, to see how everything works. Does JavaCPP have tests that would benefit from running on all platforms, or is there no point?
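As a rough sketch of what the Travis side of that stack could look like (this is an assumed, minimal config, not the one actually used in the fork), a multi-OS `.travis.yml` might be:

```yaml
# Hypothetical minimal .travis.yml: Travis runs one job per OS listed,
# covering the Linux and OS X legs; AppVeyor would cover Windows via
# its own .appveyor.yml.
language: java
os:
  - linux
  - osx
script:
  - mvn clean install
```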
Right, but it's because of the presets that it works. SWIG doesn't do that. They put something together but never actually test it with anything useful. Yes, we need Visual Studio for CUDA, FlyCapture, and others like that. We also need Mac OS X. But for now, if you could get something running on, say, Windows, that'd be great. I have no preferences w.r.t. the tools. When we get something working, we'll move it somewhere on the cloud, yes, that's the idea. I should be able to get a bit of funding for that. The presets builds are basically the tests that need to run.
In any case, thanks a lot for your interest and let me know if you encounter any problems! Thanks
Branch for JavaCPP here: https://github.com/Arcnor/javacpp/tree/ci Current builds:
The Windows machines are building but tests are failing :( Hopefully I'll have a bit more time today or tomorrow to fix the Windows build and then start with the JavaCPP presets, although I'd like to move to CircleCI (it has test integration, while Travis does not), but it will depend on the time I have 😄
Nice! Don't feel pressured to do everything in a week though :)
I see that you released a new version already, awesome! I guess we can open a new issue for the CI or just continue here, I don't have a strong opinion on that 😄 JavaCPP is now built on all the platforms I mentioned in my last message without errors, which is nice. If you want, you can sign up for both services and enable the builds. If you prefer the git commit history to be cleaner and want me to squash commits or things like that, just let me know, although that will take me a while longer 😄 I've also started to compile the presets on my fork as well, using the same approach.
Yah, new release, but users don't test stuff until it's released, and then they send a boatload of bug reports: too late guys, it's released. So that's why we need CI :) If you want to open another issue, that's fine. There's also a mailing list if you prefer that for discussions: https://groups.google.com/forum/#!forum/javacpp-project Your choice. So, you're making progress, great! What do your changes look like? What we need built are the presets, obviously. The core module of JavaCPP is the easy part...
You can check my current changes on my branch: https://github.com/Arcnor/javacpp-presets/tree/ci Right now, there are some failures for different reasons: LLVM can't be compiled because it takes more than 50 minutes, and CUDA because a package needs to be installed, for example. The rest I can fix, I think, I just ran out of time yesterday.
Oh, cool :) Thanks
Hi, any updates with Travis CI? Does it look like it's possible to compile all of the presets with it after all? (BTW, for things like LLVM, it might be possible to build it in chunks..) If not, please let me know. We need to evaluate our options, so it's good to know what works and what doesn't! Thanks for testing all that BTW. @jjYBdx4IL is another guy who showed interest in getting that stuff up and running, but gave up, I think. It's not as easy as it seems. Anyway, if there's anything that needs to be modified in JavaCPP or the presets, but you don't know how, let me know! I'll be happy to fix it up.
Hi, I've been slowly fixing stuff. You can always check my progress on the same link (https://travis-ci.org/Arcnor/javacpp-presets and my repo branch). Last build was on the weekend IIRC. I've been modifying the scripts a little, mainly removing excessive logging. Major problems right now are:
All in all, it's looking great IMHO :). Right now we have 22 of 36 successful builds, and some of them (like tensorflow) are "easy" to fix. The others will need some thinking.
Great! Thanks for the update :)
Hey, how is everything going? Is there anything that you are having problems solving? I understand there are a lot of things that we need to get working, but even if we don't get everything working, since they are free services, it gives us at least something :) CUDA can be installed from repos on Linux as well, which should make things easier there. But it will still need a library that only comes with the driver ATM. If we use the "--cudart=static" flag for NVCC, it might fix this missing-library issue both at build time and at runtime on systems that don't have a GPU: bytedeco/javacpp-presets#219 If you have the time to check this out, it would be great. BTW, the Linux versions should be built on CentOS. Does Travis CI offer that distro? If not we can use Docker, which seems the way to go: https://djw8605.github.io/2016/05/03/building-centos-packages-on-travisci/
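As a hedged sketch of the static-runtime idea (the variable name and script invocation here are hypothetical, not the presets' actual build config), a CI fragment could pass the flag through to nvcc like this:

```yaml
# .travis.yml fragment (hypothetical names): have nvcc link libcudart
# statically so machines without a GPU driver don't need libcudart.so
# at build or test time.
env:
  global:
    - NVCCFLAGS="--cudart=static"
script:
  - bash cppbuild.sh install cuda
```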
Hey, sorry, I haven't been able to do much on this recently, I have been swamped with other work. Next week I'm planning to continue using/working with JavaCPP, so I'll try to continue on this. Regarding CUDA, cool that we can do something about it, although that was the last one I was going to try fixing, as it looks the most complex :). Regarding the distro, right now it uses the Travis CI default (Ubuntu). Using Docker is possible, but there are some limitations when doing that, and I need to check if my current approach works with them.
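A minimal sketch of what the CentOS-in-Docker approach on Travis might look like (the mounted path and build script are assumptions, not the actual setup):

```yaml
# .travis.yml fragment: run the build inside a CentOS 7 container so
# binaries target CentOS instead of Travis's default Ubuntu image.
sudo: required
services:
  - docker
script:
  - docker run -v "$TRAVIS_BUILD_DIR":/workspace -w /workspace centos:7 ./ci/build.sh
```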
Hey, just adding a quick comment to say that I'm still planning to go with what I said, but I didn't get around to getting back to my work using JavaCPP because other things took priority. Will try to get back to this ASAP :)
Sounds great, thanks!
Hi there, it's been a long time :). From time to time I want to continue on this, but I still haven't found the time at work to continue on my JavaCPP-related project, which limits my options in that regard :) In any case, I was thinking that maybe we should just enable CI for the presets that work right now? That way, you'll get less overhead when you need a new version, we don't lose all the work I did, and we can fix the ones that don't work later. I'm rerunning the build to see if Travis still works with my changes, and to see which ones are "usable". I also need to merge your latest changes, but one step at a time :). What do you think?
So the working projects on "all" platforms (Linux x86/x64 & OSX x64) are:
Working on Linux x64/OSX x64:
This is all on 1.2, but I think it's a great start. I'll rebase on top of the current master and retry, but this is just so you can get an idea of what we can build right now that might be worth adding to the official repo, and then we fix the rest later.
Yes, it's a good start, thanks! BTW, @vb216 has been working in parallel using Jenkins, some results here: http://bytedeco.org/builds/ This is also still a WIP though. Right now it's running on AWS for Linux and Windows, but there is no support for Mac OS X there... Given that Travis CI doesn't do Windows, and AWS doesn't do Mac, I'm starting to think of getting a Mac mini at https://www.macstadium.com/ or something, with Linux and Windows running in virtual machines on the same physical machine. It would make it a lot easier to get CI running that way instead of having to deal with security all over the place between different cloud providers. We'd still need something else for Power, but that would be about it. @Arcnor @vb216 Thoughts?
Well, the big problem for me is time... It's been hard (/time consuming) enough just getting everything to build in one go. To me the point of a CI env should be to get regular builds out or to test changes, but it feels like I'm usually behind the curve playing catch-up. Platform-wise, the Jenkins setup is pretty flexible, adding new build nodes is just a case of having them connect via VPN to the main AWS instance. I've just tried moving the ARM build from Docker/cross-tools to native on a Pi, another case where it's been taking more and more time just to get dependencies and libraries working, but a lot easier on the native device. Similar approach for Mac at the moment. So, I guess the key question would be: what's the main end goal, and does reworking the solution architecture move closer towards it? If running it all off a Mac instance there means it could be opened up for more people to resolve build issues, that would be a definite bonus.
When you say build system for users, do you mean users can interact with the GUI, i.e. kick off jobs? Or just see the progress/output of the last builds and have snapshots available from Maven? If it's the latter, that's already working in the Jenkins->bytedeco website output and the uploaded builds. If it's the former, that does add more complexity and management effort. Jenkins itself can be as secure as needed I suspect (as demonstrated by other projects using publicly accessible versions), but they presumably have a team supporting that effort. I'm no Jenkins advocate especially, and I suspect that would apply to any CI choice if it's going to be open to the public and not be routinely patched, configs updated, intrusion detection logs checked, etc. You could maybe have an approach of: let's just go for it; if it gets hacked, no problem, and the impact could be reduced down to Maven credentials being compromised (and the usual, servers end up being smtp relays, or join a botnet). Or, if you want public GUI interaction, you could maybe run a public CI in a VM/Docker and push the data from the private CI to the public one to reduce some of the risks. I'm definitely no CI expert though, just basing this on lessons learnt the hard way already!
Ideally, the former, that's what a build system is for! And in this case especially, being able to get builds for all platforms, regardless of the developer's machine. But sure, we can start with the latter and only expose final outputs. Because, like you say, the lack of time is one factor. However, for the same reason, I feel it takes more time to maintain multiple machines from multiple cloud vendors than just one, doesn't it?
Well, I disagree a little that a build system is primarily for public build requests... I had a quick look just to sanity check, and it's hard to find any Jenkins CI that is set up for the public to start a build. I'd imagine the normal trigger would be either a new commit (giving the benefit of testing that commit, and making it instantly available), a scheduled run, or a developer-initiated build. But, I figure, why not. I've taken a backup, and it's now publicly accessible here: http://javacv.hopto.org:8085/ If you really want anonymous build requests, it's one more click to enable that. The time-consuming bit with multiple build machines isn't so much their location or vendor base, it's the updates/changes to keep them able to build the latest commit, either on this repo or from something upstream that is pulling its latest version. I can't see a way around that, but I can see extra complexity in trying to do it all on one box (with each VM/Docker needing 2GB+ to run, that's a sizeable footprint too). Having said that, getting a Mac from the place you mentioned earlier could be good, to move more into the cloud. Currently I've got the Mac and ARM platforms here on a separate subnet firewalled off, so it would be one less security headache on my end :)
I'm sorry, I didn't mean that a build system should be available to anonymous users or anything, but that it should be usable by the community. That's what Travis CI is, and that's what Koji is (see https://fedoraproject.org/wiki/Using_the_Koji_build_system ), although that's not what Jenkins is, it's only a tool. It can be part of the solution, and what you are doing right now can actually be used by our community. The workflow is a bit convoluted, but it works! For example, a developer on Windows fixes something with one of the presets, but needs to provide updated Mac binaries for one of their users, so:
So, I do hope you can keep working on that, hopefully with help from others (@Arcnor?). Maintaining the boxes is obviously time consuming. If we had a lot of members to take care of this it wouldn't be an issue, but the fewer boxes there are to maintain, the less time anyone has to spend on that, right?
Hm, I'm not sure I follow. I'd argue they're all just tools... I'm guessing the difference you have in mind is how the build is constructed, with a more dynamic provision of build envs in other CIs, versus Jenkins sending build jobs to existing server instances (?). Conceptually, I prefer the look of Travis and similar, but I think it's often the case that when bespoke requirements increase, you need more tailoring. With the Jenkins approach, if you can establish an ssh connection back to the build master, then you can build on that node. That's why adding things like PowerPC took only a few days, but I'm struggling to see how that could be achieved with the more cloud-based solutions. But, as we've said, the difficulty and time consumption is in having to maintain all these OS instances. That has plus sides too: where you've needed specific versions of libs for the build, there's the flexibility to set them up, which might be more challenging if there's only one image type available from a CI provider. Despite how it might sound, I'm really not advocating one over the other; I'm actually quite interested in how best in practice to tackle this kind of problem. If I had more time, it would be quite interesting to work on Travis. Related to that, and your point about reducing boxes to maintain, I guess the approach there is cross-compiling or Docker, removing the dependency on building natively for PowerPC? I could see that blocking a lot of CI solutions. Looking at your workflow, there aren't many changes needed for that in the Jenkins setup, just switch it to build on new commits (which makes sense, to make the latest snapshots available with changes). Meanwhile, any changes to build more out of Docker would help both in the Jenkins solution and possibly for any other CI choice too, I think. Looks like I should scrap the native Pi ARM build and go back to trying to get the Docker or cross-compile working again...
Reasonable progress: the centos Docker approach is largely working for x86_64. The big problem is the 50 min build time limit; llvm and tensorflow are hitting it. Is there anything in the default tensorflow build that isn't used and could be disabled? It seems to build around 2250 out of 2500 targets at the 50 min mark. The only other option I could think of is some prebuild step that supplies a chunk of prebuilt code, but that's not great for making a clean build. Or, it looks like their paid-for options have a 120 min time limit rather than 50 min (though I'm not 100% sure what the 120 mins limits just yet; I assume individual build steps, like the 50 min limit). The time limit will get harder to meet for other platform targets as we may need to install things like CUDA; for centos7, nvidia supply a prebuilt Docker image.
@vb216 thanks for looking into this! Looks like the build time limit is going to be a problem: travis-ci/travis-ci#2736 CUDA kernels are what take the most time to compile in TensorFlow, but even if we cut things down now, having to deal with that in the long term isn't appealing. @Arcnor, would you have ideas?
FWIW, the 'bootstrap' plan might solve the issue, but it's USD 69 a month. It seems on the paid-for platform, the 120 min time limit applies to the job tasks, and as @Arcnor split them out, the longest-running should complete within that time. I suspect with builds needing more customisation before the real build starts, we'd be even more north of 50 mins (but probably under 120). They do offer a trial for the first 100 builds, if the paid-for option was ever likely to be a long-term option, but otherwise best not to waste effort. Alternatively, what about CircleCI? They seem to have a bit more flexibility for open source projects https://circleci.com/pricing/#faq-section-os-x but I have even less experience with them than with Travis...
Dropped Travis support an email just to check if there's any flexibility on the 50 mins when it's a clear compile-time issue for an open source project.
Ok, thanks! I think there is no flexibility though. Even when paying, it sounds like 120 minutes is a hard limit we're likely to hit eventually. Anyway, at that price, might as well pay for a Mac instance...
Good progress I think... https://travis-ci.org/vb216/javacpp-presets/builds/212507132 as an example of a recent build: centos 64-bit Docker all built fine; Mac build not bad, missing cudnn (not sure how to tackle downloading this into a clean VM, as you need to sign in with a developer account to access it), and caffe not finding libquadmath.a (rings a bell I think..?). Windows 64 build here https://ci.appveyor.com/project/vb216/javacpp-presets/build/47, similar issue needing cudnn downloaded, libdc1394 needing something, but apart from that looking pretty good. So, the tricky one is the cudnn dependency. After that, we'd need to think about what should be snapshot deployed; as these are built out of sequence in parallel, whether a deploy at the end of each step would enable a platform build (the .m2 dir is cached between builds).
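The .m2 caching mentioned above is plain Travis cache configuration, roughly:

```yaml
# .travis.yml fragment: persist the local Maven repository between
# builds so artifacts installed by one build are visible to the next.
cache:
  directories:
    - $HOME/.m2
```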
Yes, looking pretty good. Thanks! It's not just cuDNN though; there's also FlyCapture, and I'm also working on MKL, among others. Wouldn't they offer some sort of private area where we could put things required for the build, but that wouldn't be publicly available for download?
I wondered about Google Drive / Dropbox, with a secure download? That way, it's not publicly downloadable, the access token can be encrypted into the build file, and from a quick read of the cuDNN license, it doesn't seem to immediately break any conditions. I assume their main concern is around having control of the code if it gets used in some way they're not happy with, so I can't see a locked-down build agent causing much concern (?). The only other idea is to use their cache, which gets stored on encrypted AWS S3 somewhere, and is only accessible by the build tree. But, a) there's a size limit and binaries could eat that up quick, b) Drive/Dropbox seems more under easy control, and with the access token/password it just becomes another download link from the build's perspective.
This? https://docs.travis-ci.com/user/environment-variables/#Encrypting-environment-variables Yes, that sounds like a great idea!
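For reference, the mechanism behind that doc: the `travis` CLI encrypts a variable against the repo's public key, and the result lands in `.travis.yml` roughly like this (the variable name here is hypothetical):

```yaml
# Added with: travis encrypt CUDNN_TOKEN=<secret> --add env.global
# Travis decrypts the blob at build time and exports it as an env var;
# decryption is disabled for pull requests from forks.
env:
  global:
    - secure: "base64-encrypted-blob=="
```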
Yeah, that's the one. Need to set it up for Maven deploy credentials anyway, so hopefully the same technique will solve the downloads issue. Will give it a try.
Encrypted variables work nicely! https://travis-ci.org/vb216/javacpp-presets/builds/213059693 has CUDA building fine for Mac now. Trying to do the same for FlyCapture but it needs a bit of tidy-up to recreate the deb contents in a usable way for CentOS (unless there's an easier way around that!). Any thoughts on solving this one: I remember quadmath cropping up before but I thought the brew install gcc-5 fixed it; seems something else is missing now? With that cleared out, the Mac build is nearly done too.
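A sketch of how such a gated download might look in the build, with an entirely hypothetical URL and variable name (the token being one of those encrypted variables):

```yaml
# .travis.yml fragment (hypothetical URL/variable): fetch cuDNN from
# private storage using a token that is only decrypted on trusted builds.
before_install:
  - curl -fsSL -H "Authorization: Bearer $CUDNN_TOKEN" -o cudnn.tgz "https://example.com/private/cudnn.tgz"
  - tar -xzf cudnn.tgz -C "$HOME"
```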
Nice! libquadmath.a comes with gfortran, have you also installed that?
Seems the gfortran stuff is wrapped up in the gcc brew, but something strange with the paths. Added a symlink and it works fine. So, actually pretty close. Windows 64 is pretty much all there (https://ci.appveyor.com/project/vb216/javacpp-presets/build/109), centos64 and mac64 are also all done apart from tensorflow on Mac, which throws a genuine error by the looks of it. Just adding in FlyCapture for all of those and then I think we have a clean sweep. Shouldn't be too hard from there adding in the 32-bit versions and arm/ppc.
Wow, impressive! For FlyCapture, are you thinking of archiving the necessary files and having that downloaded and extracted, a bit like cuDNN?
Yeah, had to do that with the Linux packages already as they're set up for Ubuntu, not CentOS... Sometimes the installers have nice /quiet type options on Windows, but otherwise I've had to zip it up and move it to Program Files (think that was the libdc1394 problem). Hopefully will get a chance to look at adding Android next. Don't know when/if you want to apply for accounts for the main repo and phase the rollout, or try to go big bang. The AppVeyor win64+32 builds could be done and dusted pretty easily I think, and they're self-contained in their own .appveyor config, so that could be a candidate to go in sooner.
For sure, start a pull request for that, and feel free to start deploying SNAPSHOTs too. Sounds great!!
OK cool, let's tick off the Windows build piece then. Just updating to run against the latest commits. Will try adding in a deploy step and then we should be done! Could you request an AppVeyor account using the main repo credentials? It looks like there's a way to then add members in as roles https://www.appveyor.com/docs/team-setup/ - from what I understand each repo has a different encryption key, so I'd need to generate new keys for Dropbox and deploy to work.
Closing this since, thanks to @vb216, AppVeyor and Travis CI are now working pretty well! BTW, each preset can now pick up what it needs to build from other presets, all from resources in Maven artifacts: bytedeco/javacpp-presets@778b586 For that, we have to set the copyResources parameter, for example, via its property with mvn install -Djavacpp.copyResources .... Then, on the CI servers, we just have to make sure that ~/.m2 gets copied to the cache and that dependencies get built in order. While we're at it, let's list the modules in .appveyor.yml and .travis.yml in the same order as in the parent pom.xml file: https://github.com/bytedeco/javacpp-presets/blob/master/pom.xml#L39 Sounds good @vb216?

The only problem there is you'd need to limit Travis to not run in parallel; currently it starts 6 jobs in one go to speed up the build. I think with that, it's pretty likely that, say, job number 5 might depend on job number 3, and so fail (or worse, seem to succeed because there's an old artifact cached in .m2). If you're happy to take the hit on running one job at a time, it should work fine.
Right, looks like a feature they'll implement eventually: travis-ci/travis-ci#5790 So it'll get there eventually, but for now serial might be the lesser evil? I wonder if there's a way to define multiple independent matrices...

Yeah, gets my vote. From a first look, it might not shift the build time out too far. Could you update the main repo with that new setting (concurrent builds set to 1), and just check that you've got CI_DEPLOY_PASSWORD and CI_DEPLOY_USERNAME defined as secret variables in the project settings too? There will be a PR before long with deploy vs. snapshot depending on whether it's a pull request or not, along with quite a few other improvements.
The variables are there, yes. I'll leave the parallel build execution on and see what that gives before disabling it though. When builds fail it's always possible to restart them :)
Ah, not looking good. With a build matrix, each task has its own cache, so they never see the .m2 additions made by previous tasks: travis-ci/travis-ci#7590 You'd either need to upload artifacts to S3 (which brings back the problem of credentials across different repos, but is feasible: https://docs.travis-ci.com/user/uploading-artifacts/), though then you have to manage downloading them too... Or, enable snapshots in the .m2 settings, which might be OK when doing mvn deploy as previous steps would have updated it, but wouldn't work for PRs if the PR contained modifications to more than one project. I don't see many drawbacks to the previous way, apart from opencv and the deps on it taking a while to build (and the nvcc stuff would totally blow out the build time then), compared to the complexity around the other options. Really just feels like something where a nice cloud NFS solution could solve the .m2 and download headaches!
Actually, maybe scp is the answer? Would need some remote NAS box / mini AWS instance with very low CPU but a few GB of disk. Travis and AppVeyor support encrypted files, so encrypt the ssh key, then all done. Secure downloads become a lot easier to do and manage, builds could have their output sent there with a unique build number so as to not pick up stale jars, and it's simple, tried and tested technology...
Hum, that's too bad. I think SNAPSHOT would be good enough for now. We could eventually run our own Nexus server to have more flexibility too. It doesn't seem to me that scp would offer more?
I had been looking into other options. AWS S3 could have been a good one; it would support running a sync before and after jobs to build up the .m2 directory on there, and works nicely via their CLI if you define a couple of credentials. But... we're back to the credentials issue, that PRs seem to disable secret vars. I can't see a way around that (though it is possible to leave them clear and lock down what they can do; for S3 it's a read/write to a given folder); if you want builds set up where they are dependent on the output of a previous job, but we can't share the cache, then it has to be loaded somewhere, and somewhere else would need some form of login...

I think that issue applies to snapshots too, if that's what you meant for PRs. I could imagine a situation where opencv has been extended in a PR, to support something for, say, tensorflow, which has also been changed. Opencv would build in the PR, and whilst we'd see it build OK, the output wouldn't persist anywhere (apart from potentially in the opencv cache from the matrix setup). So when tensorflow comes to build, whilst it could pull in the latest snapshot jars, they wouldn't include the mods made to opencv as part of this PR... That would seem a fairly common use case for a PR, and I can't see a way around it yet. Having the longer opencv-and-deps setup in the matrix was one way around it. And we can't call the whole lot with just a mvn install because the build time would be many hours. I'll try posting an issue on Travis; I can't believe this is a unique scenario for just us.
Looks like there will be a solution on Travis sometime soon, but I'm not sure about AppVeyor. I'd suggest in the meantime maybe we keep using multiple project deps on a single matrix item (as per what's currently in the repo), as that gets us working builds at least? I've been working on a PR that has some curl fixes, shifts to Google Drive rather than Dropbox, has the project order tidied up (and adds extra ones that have come in, like mkl), adds the Android build, and creates snapshots for main repo builds vs. install for PRs... would be good to get that lot in and wait for a fuller fix for this issue? *Which might be putting public access on an S3 folder but only allowing Travis/AppVeyor IPs to access that location, popping the build dependencies in there too, as well as building up a 'master' .m2 dir as all the jobs run: https://docs.travis-ci.com/user/ip-addresses/ That way we get rid of any authentication needs for PR building...
Cool! In the case of AppVeyor, since it doesn't build projects in parallel, maybe there's already just one cache for all builds in a matrix? As for using Google Drive to store bundles, it looks like it might not be that simple. It seems to try really hard to prevent us from getting a direct link to large files, but if you got it working well, that's great. Allowing public access from specific IPs is something we can do with a repository like Nexus on some cloud instance too... Anyway, yes, if you have other updates, let's do that first by all means!
Will it be possible to do a new release of JavaCPP? The last one was in October 2015, and I was hoping to use the latest changes for my project, but I'm inside an environment that doesn't allow using SNAPSHOT versions of libraries (so I can't use the Sonatype snapshots).
Also, have you thought about moving to semantic versioning? (http://semver.org/)
Thanks