Skills Required: Intermediate at C/C++ and Arduino integration
Difficulty: Very easy for someone with the above skills
PLEASE NOTE: Anyone is welcome to start on this based on the current commit, but the final pull request is expected to incorporate any changes made to complete the remaining scope fixes for #1.
This should be integrated into the OakParticle.h/cpp CloudClass class, and it should use Stream and Print (as SoftwareSerial and HardwareSerial do) so that Particle.read(), Particle.write(), Particle.print(), Particle.readBytesUntil(), etc. all work.
spark_subscribe and spark_send_event are in particle_core.cpp. I think that part is pretty straightforward, as it was working in early versions of the codebase. A rough sketch of how the Stream wrapper could hook into those functions is shown below.
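For reference, here is a minimal sketch (not the actual implementation) of how CloudClass could expose the serial-style API by inheriting Arduino's Stream, assuming the spark_serial_* functions from the code block further down are exported from particle_core.cpp; the members already present in OakParticle.h will differ.

// Sketch only: inheriting Stream (which inherits Print) and overriding these
// virtuals is enough for read(), write(), print(), readBytesUntil(), etc.
#include <Stream.h>

// assumed to be provided by particle_core.cpp (see the code block below)
void   spark_serial_begin();
void   spark_serial_end();
int    spark_serial_available();
int    spark_serial_read();
int    spark_serial_peek();
void   spark_serial_flush();
size_t spark_serial_write(uint8_t b);

class CloudClass : public Stream {
public:
    void begin() { spark_serial_begin(); }   // should allocate buffers and subscribe to oak/device/stdin
    void end()   { spark_serial_end(); }     // should free the buffers again
    virtual int available() { return spark_serial_available(); }
    virtual int read()      { return spark_serial_read(); }
    virtual int peek()      { return spark_serial_peek(); }
    virtual void flush()    { spark_serial_flush(); }
    virtual size_t write(uint8_t b) { return spark_serial_write(b); }
    using Print::write;                      // also accept write(const char*) and write(buf, len)
};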
Challenges/Thoughts:
Can this be made to work without a transmit buffer, or is there any other idea that reduces RAM usage? We don't want to send just one character at a time to the cloud, so some buffer seems necessary.
The buffers should not be allocated until Particle.begin() (spark_serial_begin) is called, so those who don't use this feature don't have to give up the RAM for the buffers. Likewise, it would be nice to de-allocate them when end is called. One possible approach is sketched below.
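One way to handle the allocation question (a sketch only, assuming heap allocation is acceptable here and reusing spark_subscribe, spark_get_rx, and MY_DEVICES exactly as they appear in the code block below):

#include <stdlib.h>

#define MAX_SERIAL_BUFF 256

// Buffers live on the heap and stay NULL until Particle.begin() is called,
// so sketches that never use this feature pay no RAM cost.
static char *spark_receive_buffer  = NULL;
static char *spark_transmit_buffer = NULL;

void spark_serial_begin() {
    if (spark_receive_buffer != NULL)
        return;                                    // already started
    spark_receive_buffer  = (char *)malloc(MAX_SERIAL_BUFF);
    spark_transmit_buffer = (char *)malloc(MAX_SERIAL_BUFF);
    if (spark_receive_buffer == NULL || spark_transmit_buffer == NULL) {
        spark_serial_end();                        // allocation failed, release whatever we got
        return;
    }
    // subscribe to the cloud "stdin" stream only once the buffers exist
    spark_subscribe("oak/device/stdin", spark_get_rx, NULL, MY_DEVICES, NULL, NULL);
}

void spark_serial_end() {
    free(spark_receive_buffer);                    // free(NULL) is a safe no-op
    free(spark_transmit_buffer);
    spark_receive_buffer  = NULL;
    spark_transmit_buffer = NULL;
    // the head/tail indices should also be reset here
}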
To test:
Download this repository and copy it over the core files for the ESP8266 core.
NOTE: We are asking you to use the standard ESP8266 core to ensure no conflicts/errors come from our other core changes. We will be uploading the rest of the Oak core tomorrow, but want to focus on this first. Because of this, pin numbers and other things will not match the Oak, but that does not matter for this work since no physical testing is needed.
Complete the above work, then ensure this example compiles (a stand-in sketch is shown below):
We will complete all actual device testing, and that is not required for the bounty.
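The example referenced above is not reproduced here, so the following is only a hypothetical stand-in showing the kind of usage the finished integration should compile against: Particle.begin(), the Stream-style available/read calls, and Print-style output. Everything else in the sketch is illustrative.

// Hypothetical test sketch: echo anything received from the cloud back out.
#include <Arduino.h>
// #include "OakParticle.h"           // assumed to provide the global Particle object

void setup() {
    Particle.begin();                 // allocates the serial buffers and subscribes to stdin
    Particle.println("serial over cloud up");
}

void loop() {
    while (Particle.available() > 0) {
        char c = (char)Particle.read();
        Particle.print("echo: ");
        Particle.println(c);          // println() comes for free from Print
    }
    delay(50);
}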
Code to work from/use for understanding:
#define MAX_SERIAL_BUFF 256

char _receive_buffer[MAX_SERIAL_BUFF];
char _transmit_buffer[MAX_SERIAL_BUFF];
volatile uint8_t _receive_buffer_tail = 0;
volatile uint8_t _receive_buffer_head = 0;
volatile uint8_t _transmit_buffer_tail = 0;
volatile uint8_t _transmit_buffer_head = 0;
volatile uint8_t _listening;
volatile uint8_t _buffer_overflow;

void spark_serial_begin() {
    // don't allocate buffers until this is called
    spark_subscribe("oak/device/stdin", spark_get_rx, NULL, MY_DEVICES, NULL, NULL);
}

void spark_serial_end()
{
    // de-allocate buffers here
}

// Read data from buffer
int spark_serial_read()
{
    // Empty buffer?
    if (_receive_buffer_head == _receive_buffer_tail)
        return -1;

    // Read from "head"
    uint8_t d = _receive_buffer[_receive_buffer_head]; // grab next byte
    _receive_buffer_head = (_receive_buffer_head + 1) % MAX_SERIAL_BUFF;
    return d;
}

int spark_serial_available()
{
    return (_receive_buffer_tail + MAX_SERIAL_BUFF - _receive_buffer_head) % MAX_SERIAL_BUFF;
}

size_t spark_serial_write(uint8_t b)
{
    // if buffer full, set the overflow flag and return
    uint8_t next = (_transmit_buffer_tail + 1) % MAX_SERIAL_BUFF;
    if (next != _transmit_buffer_head)
    {
        // save new data in buffer: tail points to where byte goes
        _transmit_buffer[_transmit_buffer_tail] = b; // save new byte
        _transmit_buffer_tail = next;
        return 1;
    }
    else
    {
        _buffer_overflow = true;
        return 0;
    }
}

void spark_serial_flush()
{
    _transmit_buffer_tail = _transmit_buffer_head;
}

int spark_serial_peek()
{
    // Empty buffer?
    if (_receive_buffer_head == _receive_buffer_tail)
        return -1;

    // Read from "head"
    return _receive_buffer[_receive_buffer_head];
}

// this is automatically called when new data comes from the cloud
void spark_get_rx(const char* name, const char* data)
{
    if (data && *data) {
        while (*data != '\0') {
            // if buffer full, set the overflow flag and return
            uint8_t next = (spark_receive_buffer_tail + 1) % MAX_SERIAL_BUFF;
            if (next != spark_receive_buffer_head)
            {
                // save new data in buffer: tail points to where byte goes
                spark_receive_buffer[spark_receive_buffer_tail] = *data; // save new byte
                data++;
                spark_receive_buffer_tail = next;
            }
            else
            {
                spark_buffer_overflow = true;
                return;
            }
        }
    }
}

void spark_send_tx()
{
    if (spark_transmit_buffer_tail == spark_transmit_buffer_head) // nothing buffered
        return;

    uint8_t buffer_length = (spark_transmit_buffer_tail + MAX_SERIAL_BUFF - spark_transmit_buffer_head) % MAX_SERIAL_BUFF;
    char buff[buffer_length + 1];                  // +1 for the terminating NUL
    for (uint8_t b = 0; b < buffer_length; b++) {  // b must start at 0
        // Read from "head"
        buff[b] = spark_transmit_buffer[spark_transmit_buffer_head]; // grab next byte
        spark_transmit_buffer_head = (spark_transmit_buffer_head + 1) % MAX_SERIAL_BUFF;
    }
    buff[buffer_length] = '\0';                    // spark_send_event expects a C string
    spark_send_event("oak/device/stdout", buff, 60, PRIVATE, NULL);
}

void spark_process()
{
    spark_send_tx();
    // EXISTING CODE IN SPARK PROCESS HERE
}
Bounty
$50 cash or $100 credit or 10 Oaks
Deadline: 23:59 Saturday January 16th UTC - the first pull request that meets all requirements will win the bounty. Additional improvements after that may or may not be rewarded, depending on the contribution, at our discretion.
If you are sure you can do this and are able to do it quickly, please feel free to respond to this issue saying that you are working on it and when you will complete it, so that others don't waste their time on it. If you are just entertaining the idea or unsure whether you can do it, please don't "claim" it until you are sure.
Cash or credit is your choice. Cash to be paid via Paypal. Credit has no expiration.
You may credit yourself in the files as well, leaving existing licenses and credits intact.
Legal Stuff: We will choose a winner at our sole discretion. The winner will be the first pull request that submits fully working code meeting the above requirements and following good coding practices, based on the timestamp of the pull request. Bounty will be awarded (or in the case of Oaks, sent) within 48 hours of confirming winner. Cash awards will be made in USD. This is not an offer for hire. All work submitted becomes the property of Digistump LLC to be used at our discretion in compliance with any associated licenses. Void where prohibited by law.
The code above shows a mix of "spark_transmit_buffer" and "_transmit_buffer"; are these meant to be different things or the same? Other than this, I am ready to submit a pull request.
@danielmawhirter they should all read with "spark" at the front of the variable name - just grabbed this from my notes and an old build and didn't realize they didn't match, sorry about that!
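Based on that clarification, a sketch of the consistently named globals the posted code would then expect (same sizes and types as in the snippet above):

// Sketch only: the same buffers and indices as above, renamed with the
// spark_ prefix so that spark_get_rx() and spark_send_tx() compile unchanged.
#define MAX_SERIAL_BUFF 256

char spark_receive_buffer[MAX_SERIAL_BUFF];
char spark_transmit_buffer[MAX_SERIAL_BUFF];
volatile uint8_t spark_receive_buffer_tail  = 0;
volatile uint8_t spark_receive_buffer_head  = 0;
volatile uint8_t spark_transmit_buffer_tail = 0;
volatile uint8_t spark_transmit_buffer_head = 0;
volatile uint8_t spark_listening;
volatile uint8_t spark_buffer_overflow;

// The spark_serial_* functions above would then read/write
// spark_receive_buffer / spark_transmit_buffer instead of the
// underscore-prefixed names.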