This repository has been archived by the owner on Oct 12, 2023. It is now read-only.

Feature/faq #110

Merged
merged 10 commits into master from feature/faq
Sep 18, 2017

Conversation

paselem
Contributor

@paselem paselem commented Sep 8, 2017

Some common troubleshooting issues and FAQs

@msftclas

msftclas commented Sep 8, 2017

@paselem,
Thanks for your contribution as a Microsoft full-time employee or intern. You do not need to sign a CLA.
Thanks,
Microsoft Pull Request Bot

getClusterFile(cluster, "tvm-1170471534_2-20170829t072146z", "stderr.txt", downloadPath = "pool-errors.txt")

# Get standard log file
getClusterFile(cluster, "tvm-1170471534_2-20170829t072146z", "stderr.txt", downloadPath = "pool-errors.txt")
Collaborator

It should be getClusterFile(cluster, "tvm-1170471534_2-20170829t072146z", "stdout.txt", downloadPath = "pool-logs.txt")
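
For clarity, a hedged sketch of what the corrected pair of calls could look like (the node id and download paths are copied from the quoted hunk; `cluster` is the object returned by `makeCluster` as shown elsewhere in this doc):

```r
# Get standard error file from the node
getClusterFile(cluster, "tvm-1170471534_2-20170829t072146z", "stderr.txt",
               downloadPath = "pool-errors.txt")

# Get standard log file from the node -- note stdout.txt, not stderr.txt
getClusterFile(cluster, "tvm-1170471534_2-20170829t072146z", "stdout.txt",
               downloadPath = "pool-logs.txt")
```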

```

### My job never starts running. How can I troubleshoot this issue?
This is often caused by the node not being in a good state. Take a look at the state of the nodes in the cluster to see if any of there are and nodes in an error or failed state. If not node is in a startTaskFailed state follow the instructions above. If the node is in an 'unknown' or 'unusable' state you may need to manually reboot the node.
Collaborator

Typo:
If the node is in a startTaskFailed state, follow the instructions above

```

### My job never starts running. How can I troubleshoot this issue?
This is often caused by the node not being in a good state. Take a look at the state of the nodes in the cluster to see if any of there are and nodes in an error or failed state. If not node is in a startTaskFailed state follow the instructions above. If the node is in an 'unknown' or 'unusable' state you may need to manually reboot the node.
Collaborator

Typo?
Take a look at the state of the nodes in the cluster to see if any of them have an error or are in a failed state?

cluster <- makeCluster('myConfig.json')
...
# Get standard error file
getClusterFile(cluster, "tvm-1170471534_2-20170829t072146z", "stderr.txt", downloadPath = "pool-errors.txt")
Contributor

just to be super obvious, we should also remind people how to get their node id (in this case: tvm-1170471534_2-20170829t072146z)

```

### My job never starts running. How can I troubleshoot this issue?
This is often caused by the node not being in a good state. Take a look at the state of the nodes in the cluster to see if any of there are and nodes in an error or failed state. If not node is in a startTaskFailed state follow the instructions above. If the node is in an 'unknown' or 'unusable' state you may need to manually reboot the node.
Contributor

again to be super obvious - let's actually just tell them how to check the state of their nodes
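
For illustration, a minimal sketch of checking node states, using the listPoolNodes() helper mentioned later in this thread; its exact signature and the shape of its return value are assumptions here:

```r
# Sketch only: list the nodes in the cluster's pool and print id/state pairs.
# The pool id argument and the REST-style $value list are assumptions.
nodes <- listPoolNodes('<my_cluster_id>')

# Node ids look like 'tvm-1170471534_2-20170829t072146z'; states of interest
# include 'starttaskfailed', 'unknown' and 'unusable'.
for (node in nodes$value) {
  cat(node$id, "-", node$state, "\n")
}
```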


```r
# reboot a node
rAzureBatch::rebootNode('<my_cluster_id>', '<my_node_id>')
Contributor

i think we should add a comment below this line:

# your node_id typically looks something like this 'tvm-1170471534_2-20170829t072146z'

* removing '/usr/lib64/microsoft-r/3.3/lib64/R/library/__PACKAGE__'
```

This issue is due to certain compiler flags not available in the default version of R used by doAzureParallel. In order to get around this issue you can add the following commands to the command line in the cluster configuration to make sure R has the right compiler flags set.



### Why do some of my packages install an older version of the package instead of the latest?
Since doAzureParallel uses Microsoft R Open version 3.3 as the default version of R, it will automatically try to pull pacakge from [MRAN](https://mran.microsoft.com/) rather than CRAN. This is a big benefit when wanting to use a constant version of a package but does not always contain references to the latest versions. To use a specific version from CRAN or a different MRAN snapshot date, use the 'commandLine' in the cluster configuration to manually install the packages you need.
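
As an illustration (not the project's documented recipe), this is the kind of install command such a commandLine entry might run via Rscript; the package name, version, and snapshot date below are placeholders:

```r
# Sketch only: install a package from a specific MRAN snapshot instead of the
# snapshot pinned by the default MRO 3.3 (package and date are placeholders).
install.packages("data.table",
                 repos = "https://mran.microsoft.com/snapshot/2017-09-01")

# Or pin an exact CRAN version with devtools (version number is a placeholder).
devtools::install_version("data.table", version = "1.10.4",
                          repos = "https://cran.r-project.org")
```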

docs/42-faq.md Outdated
By default, doAzureParallel uses Microsoft R Open 3.3.

## Does doAzureParallel support a custom version of R?
No. We are looking into support for different versions of R as well as custom version of R but that is not supported today.
Contributor

typo: "...as well as custom versions of R..."

docs/42-faq.md Outdated
## Does doAzureParallel support a custom version of R?
No. We are looking into support for different versions of R as well as custom version of R but that is not supported today.

## How much does doAzureParallel cost?
Contributor

I would also just be super explicit and obvious and state "doAzureParallel itself is free to use" .. just because i've gotten this question from people before.

docs/42-faq.md Outdated
doAzureParallel is built on top of the Azure Batch service. You are billed by the minute for each node that is assigned to your cluster. You can find more infomration on Azure Batch pricing [here](https://azure.microsoft.com/en-us/pricing/details/batch/).

## Does doAzureParallel support custom pacakge installations?
Yes. The 'commandLine' feature in the cluster configuration enables running custom commands on each node in the cluster before it is ready to do work. Leverage this mechanism to do any custom installations such as installing custom software or mounting network drives.
Contributor

I would also add a link to the customize cluster doc here

Contributor

@jiata jiata left a comment

sorry, but requesting more changes!

### After creating my cluster, my nodes go to a 'startTaskFailed' state. Why?
The most common case for this is that there was an issue with package installation or the custom script failed to run. To troubleshoot this you can simply download the output logs from the node.

The following 2 nodes failed while running the start task:
Contributor

Just to be more verbose, but also a little clearer about the scenario:
"
Node ids are prepended with tvm. Let's say that when spinning up your cluster, the following 2 nodes failed while running the start task:

  • tvm-769611554_1-20170912t183413z-p
  • tvm-769611554_2-20170912t183413z-p

Here's how you would get the node ids and the logs from the nodes to debug:
"



```r
cluster <- makeCluster('myConfig.json')
Contributor

can we move this line to line 24? .. we don't use the cluster variable until we actually run the 'getClusterFile' function

Contributor Author

Yes BUT this method should print out any known errors. If not, you can use the listPoolNodes() method. Is the way I have it laid out too confusing? I can rethink it, but makeCluster should be the first thing... perhaps it should be a separate troubleshooting item?

```

### My job never starts running. How can I troubleshoot this issue?
This is often caused by the node not being in a good state. Take a look at the state of the nodes in the cluster to see if any of them are have an error or are in a failed state. If the node is in a startTaskFailed state follow the instructions above. If the node is in an 'unknown' or 'unusable' state you may need to manually reboot the node.
Contributor

How should they take a look at the state of the nodes in the cluster? If they are using the portal, we may as well tell them that they can reboot their node from there too.

Also, i think people will think "if i reboot my node, won't that mess up the deployment?" - we can give them some peace of mind by saying "When rebooting the node, we will run the necessary scripts to get your node set up to work for your doAzureParallel cluster"

docs/42-faq.md Outdated
## How much does doAzureParallel cost?
doAzureParallel itself is free to use and is built on top of the Azure Batch service. You are billed by the minute for each node that is assigned to your cluster. You can find more infomration on Azure Batch pricing [here](https://azure.microsoft.com/en-us/pricing/details/batch/).

## Does doAzureParallel support custom pacakge installations?
Contributor

pacakge spelling

@paselem paselem merged commit 8ae4fcd into master Sep 18, 2017
@paselem paselem deleted the feature/faq branch September 18, 2017 15:44
brnleehng added a commit that referenced this pull request Sep 29, 2017
zfengms added a commit that referenced this pull request Oct 3, 2017
paselem added a commit that referenced this pull request Nov 3, 2017