
BOUNTY: Improve and complete "Serial" over the cloud code #3

Closed
digistump opened this issue Jan 15, 2016 · 3 comments

@digistump
Collaborator

Skills Required: Intermediate at C/C++ and Arduino integration
Difficulty: Very easy for someone with the above skills

PLEASE NOTE: Anyone is welcome to start on this based on the current commit, but the final pull request is expected to incorporate any changes needed to complete the remaining scope of the fixes for #1.

This should be integrated into the OakParticle.h/cpp CloudClass class and use Stream and Print (as SoftwareSerial and HardwareSerial do) so that Particle.read(), Particle.write(), Particle.print(), Particle.readBytesUntil(), etc. all work.

spark_subscribe and spark_send_event are in particle_core.cpp

I think that part is pretty straightforward as it was working in early versions
of the codebase.
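
For orientation, here is a minimal sketch of what the Stream/Print plumbing could look like; the class name is hypothetical, and the real work should be folded into the existing CloudClass in OakParticle.h/.cpp, delegating to the spark_serial_* functions shown further down:

#include <Stream.h>

// Hypothetical wrapper, shown only to illustrate the plumbing; in the real
// integration these overrides would live on CloudClass itself and call the
// spark_serial_* functions from the reference code below.
class CloudSerialSketch : public Stream {
public:
    void begin()                    { spark_serial_begin(); }
    void end()                      { spark_serial_end(); }
    virtual int available()         { return spark_serial_available(); }
    virtual int read()              { return spark_serial_read(); }
    virtual int peek()              { return spark_serial_peek(); }
    virtual void flush()            { spark_serial_flush(); }
    virtual size_t write(uint8_t b) { return spark_serial_write(b); }
    using Print::write; // also expose write(const char*), write(buf, len), etc.
};

// Because the class derives from Stream, print(), println(), readBytesUntil()
// and the other Stream/Print helpers come along for free.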

Challenges/Thoughts:

Can this be made to work without a transmit buffer, or is there any other approach that reduces RAM usage? We don't want to send just one character at a time to the cloud, so some buffering seems necessary.

The buffers should not be allocated until Particle.begin() (spark_serial_begin) is called, so users who never use this feature don't have to give up the RAM for the buffers. Likewise it would be nice to de-allocate them when end is called; one possible approach is sketched just below.
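
A minimal sketch of that lazy allocation, assuming the static arrays from the reference code below become pointers (names use the spark_ prefix noted in the comments at the end of this issue):

static char* spark_receive_buffer  = NULL;
static char* spark_transmit_buffer = NULL;

void spark_serial_begin(){
  // allocate only on first use so sketches that never call begin() keep the RAM
  if (spark_receive_buffer == NULL){
    spark_receive_buffer  = (char*)malloc(MAX_SERIAL_BUFF);
    spark_transmit_buffer = (char*)malloc(MAX_SERIAL_BUFF);
  }
  spark_subscribe("oak/device/stdin", spark_get_rx, NULL, MY_DEVICES, NULL, NULL);
}

void spark_serial_end(){
  // give the RAM back and reset the ring buffer state
  free(spark_receive_buffer);
  free(spark_transmit_buffer);
  spark_receive_buffer = spark_transmit_buffer = NULL;
  spark_receive_buffer_head = spark_receive_buffer_tail = 0;
  spark_transmit_buffer_head = spark_transmit_buffer_tail = 0;
}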

To test:

NOTE: We are asking you to use the standard ESP8266 core to ensure that no conflicts or errors come from our other core changes. We will be uploading the rest of the Oak core tomorrow, but want to focus on this first. Because of this, pin numbers and other things will not match the Oak, but that does not matter for this work since no physical testing is needed.

  • Complete the above work, ensure this example compiles:

We will complete all actual device testing, and that is not required for the bounty.

Code to work from/use for understanding:

#define MAX_SERIAL_BUFF 256 


char _receive_buffer[MAX_SERIAL_BUFF]; 
char _transmit_buffer[MAX_SERIAL_BUFF]; 
volatile uint8_t _receive_buffer_tail = 0;
volatile uint8_t _receive_buffer_head = 0;
volatile uint8_t _transmit_buffer_tail = 0;
volatile uint8_t _transmit_buffer_head = 0;
volatile uint8_t _listening;
volatile uint8_t _buffer_overflow;

void spark_serial_begin(){
  //don't allocate buffers until this is called
  spark_subscribe("oak/device/stdin", spark_get_rx, NULL, MY_DEVICES, NULL, NULL);

}

void spark_serial_end()
{
  //de-allocate buffers here
}

// Read data from buffer
int spark_serial_read()
{
  // Empty buffer?
  if (_receive_buffer_head == _receive_buffer_tail)
    return -1;

  // Read from "head"
  uint8_t d = _receive_buffer[_receive_buffer_head]; // grab next byte
  _receive_buffer_head = (_receive_buffer_head + 1) % MAX_SERIAL_BUFF;
  return d;
}

int spark_serial_available()
{
  return (_receive_buffer_tail + MAX_SERIAL_BUFF - _receive_buffer_head) % MAX_SERIAL_BUFF;
}

size_t spark_serial_write(uint8_t b)
{
    // if buffer full, set the overflow flag and return
    uint8_t next = (_transmit_buffer_tail + 1) % MAX_SERIAL_BUFF;
    if (next != _transmit_buffer_head)
    {
      // save new data in buffer: tail points to where byte goes
      _transmit_buffer[_transmit_buffer_tail] = b; // save new byte
      _transmit_buffer_tail = next;
      return 1;
    } 
    else 
    {
      _buffer_overflow = true;
      return 0;
    }
}

void spark_serial_flush()
{
  _transmit_buffer_tail = _transmit_buffer_head;
}

int spark_serial_peek()
{
  // Empty buffer?
  if (_receive_buffer_head == _receive_buffer_tail)
    return -1;

  // Read from "head"
  return _receive_buffer[_receive_buffer_head];
}


void spark_get_rx(const char* name, const char* data){ //this is automatically called when new data comes from the cloud
  if (data && *data) {

    while(*data != '\0'){
      // if buffer full, set the overflow flag and return
      uint8_t next = (spark_receive_buffer_tail + 1) % MAX_SERIAL_BUFF;
      if (next != spark_receive_buffer_head)
      {
        // save new data in buffer: tail points to where byte goes
        spark_receive_buffer[spark_receive_buffer_tail] = *data; // save new byte
        data++;
        spark_receive_buffer_tail = next;
      } 
      else 
      {
        spark_buffer_overflow = true;
        return;
      }
    }

  }
}

void spark_send_tx(){

  if(spark_transmit_buffer_tail == spark_transmit_buffer_head) //nothing in buffer
    return;
  uint8_t buffer_length = (spark_transmit_buffer_tail + MAX_SERIAL_BUFF - spark_transmit_buffer_head) % MAX_SERIAL_BUFF;
  char buff[buffer_length + 1]; // +1 for the null terminator

  for(uint8_t b = 0; b < buffer_length; b++){
    // Read from "head"
    buff[b] = spark_transmit_buffer[spark_transmit_buffer_head]; // grab next byte
    spark_transmit_buffer_head = (spark_transmit_buffer_head + 1) % MAX_SERIAL_BUFF;
  }
  buff[buffer_length] = '\0'; // assumes spark_send_event takes a null-terminated string

  spark_send_event("oak/device/stdout", buff, 60, PRIVATE, NULL); 
}



void spark_process(){

  spark_send_tx();

//EXISTING CODE IN SPARK PROCESS HERE

}
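
Once the Stream/Print plumbing is in place, a user sketch would exercise the feature roughly like this (hypothetical usage, assuming Particle.begin() maps to spark_serial_begin as described above):

void setup() {
  Particle.begin();                  // allocate buffers and subscribe to oak/device/stdin
}

void loop() {
  Particle.println("hello cloud");   // queued into the transmit buffer

  if (Particle.available()) {
    char line[32];
    size_t n = Particle.readBytesUntil('\n', line, sizeof(line) - 1);
    line[n] = '\0';
    // ... act on the line received from the cloud ...
  }
  // spark_process() runs from the core loop and drains the transmit buffer
}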

Bounty

$50 cash or $100 credit or 10 Oaks

Deadline: 23:59 Saturday January 16th UTC - the first pull request that meets all requirements will win the bounty. Additional improvements after that may or may not be rewarded, depending on the contribution, at our discretion.

If you are sure you can do this and are able to do it quickly, please feel free to respond to this issue saying that you are working on it and when you will complete it, so that others don't waste their time on it. If you are just entertaining the idea or unsure whether you can do it, please don't "claim" it until you are sure.

Cash or credit is your choice. Cash to be paid via Paypal. Credit has no expiration.

You may credit yourself in the files as well, leaving existing licenses and credits intact.

Legal Stuff: We will choose a winner at our sole discretion. The winner will be the first pull request that submits fully working code meeting the above requirements and following good coding practices, based on the timestamp of the pull request. Bounty will be awarded (or in the case of Oaks, sent) within 48 hours of confirming winner. Cash awards will be made in USD. This is not an offer for hire. All work submitted becomes the property of Digistump LLC to be used at our discretion in compliance with any associated licenses. Void where prohibited by law.

@danielmawhirter
Contributor

The code above shows a mix of "spark_transmit_buffer" and "_transmit_buffer"; are these meant to be different things or the same? Other than this, I am ready to submit a pull request.

@digistump
Collaborator Author

@danielmawhirter they should all have "spark" at the front of the variable name. I just grabbed this from my notes and an old build and didn't realize they didn't match, sorry about that!

@digistump
Collaborator Author

Closing this as @danielmawhirter has won the bounty
