Track mean & standard deviation of text length as a metric for text feature #354
Conversation
Codecov Report
@@ Coverage Diff @@
## master #354 +/- ##
==========================================
- Coverage 86.8% 86.69% -0.11%
==========================================
Files 336 336
Lines 10928 10943 +15
Branches 354 343 -11
==========================================
+ Hits 9486 9487 +1
- Misses 1442 1456 +14
Continue to review full report at Codecov.
Please add a test where the `avgTextLen` result is not 0.
Resolved review threads (outdated) on:
core/src/main/scala/com/salesforce/op/filters/FeatureDistribution.scala
core/src/main/scala/com/salesforce/op/filters/RawFeatureFilter.scala
core/src/main/scala/com/salesforce/op/filters/AllFeatureInformation.scala
…tionMonoid for less boilerplate in FeatureDistribution aggregation
LGTM
@TuanNguyen27 please remember to clean the commit message prior to merging next time.
FYI, we are running into some issues with this in Spark 3, which ships json4s 3.6.6 instead of 3.5.3. It seems json4s dislikes the companion object for `Moments` having its own `apply` methods with context bounds.
Thanks for the heads up! @TuanNguyen27, can you please add the `FeatureDistribution` class and its subclasses to the JSON serialization formats for record insights? https://github.com/salesforce/TransmogrifAI/blob/master/core/src/main/scala/com/salesforce/op/ModelInsights.scala#L394
Bug fixes:
- Ensure correct metrics despite model failures on some CV folds [#404](#404)
- Fix flaky `ModelInsight` tests [#395](#395)
- Avoid creating `SparseVector`s for LOCO [#377](#377)

New features / updates:
- Model combiner [#385](#399)
- Added new sample for HousingPrices [#365](#365)
- Test to verify that custom metrics appear in model insight metrics [#387](#387)
- Add `FeatureDistribution` to `SerializationFormat`s [#383](#383)
- Add metadata to `OpStandardScaler` to allow for descaling [#378](#378)
- Improve json serde error in `evalMetFromJson` [#380](#380)
- Track mean & standard deviation as metrics for numeric features and for text length of text features [#354](#354)
- Making model selectors robust to failing models [#372](#372)
- Use compact and compressed model json by default [#375](#375)
- Descale feature contribution for Linear Regression & Logistic Regression [#345](#345)

Dependency updates:
- Update tika version [#382](#382)
Thanks for the contribution! Unfortunately we can't verify the commit author(s): Leah McGuire <l***@s***.com>. One possible solution is to add that email to your GitHub account. Alternatively you can change your commits to another email and force push the change. After getting your commits associated with your GitHub account, refresh the status of this Pull Request.
Problem context
If not treated as categorical, text features are tokenized and hashed during feature engineering. However, fields such as IDs, dates, and geographical information should be treated differently. Even when hashing is the right approach, TransmogrifAI's current default hash space is too small to capture all the information in the text. To better detect what a text field contains and to dynamically determine an appropriate hash space, we want to track the mean and standard deviation of the string length of a text feature.
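To make the proposed statistic concrete, here is a minimal, self-contained Scala sketch of computing the mean and standard deviation of text lengths. This is illustrative only: `TextLengthStats` and `meanAndStd` are hypothetical names, not TransmogrifAI API, and the real computation happens inside `RawFeatureFilter` over Spark partitions.

```scala
// Hypothetical sketch (not TransmogrifAI code): mean and standard deviation
// of string lengths for a text feature, the statistic this PR tracks.
object TextLengthStats {

  /** Returns (mean, population stddev) of the lengths of the given strings. */
  def meanAndStd(texts: Seq[String]): (Double, Double) = {
    require(texts.nonEmpty, "need at least one value")
    val lengths = texts.map(_.length.toDouble)
    val mean = lengths.sum / lengths.size
    // Population variance: average squared deviation from the mean.
    val variance = lengths.map(l => math.pow(l - mean, 2)).sum / lengths.size
    (mean, math.sqrt(variance))
  }
}
```

For example, an ID-like field ("a1b2c3", "d4e5f6", …) would show a tight length distribution (small stddev), while free text would show a wide one, which is the signal used to treat the two differently.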
Describe the proposed solution
Mean and standard deviation of text length are computed inside `RawFeatureFilter` and will be part of `FeatureDistribution`.
Describe alternatives you've considered
N/A. `RawFeatureFilter` is the appropriate place to track this information, because similar calculations (e.g. the distribution of tokens) also happen there, and the additional information about text length could help inform those other calculations to remove raw features more intelligently.
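Since `RawFeatureFilter` aggregates distributions across Spark partitions, the statistic needs a mergeable representation, which is what the follow-up "…tionMonoid" commit alludes to. Below is a hedged sketch of a `Moments`-style monoid that combines count/mean/M2 so partial results from different partitions can be merged; the names (`Moments2`, `observe`, `zero`) are illustrative and are not the Algebird or TransmogrifAI API.

```scala
// Illustrative sketch of a mergeable (monoid-style) moments accumulator,
// using the standard parallel variance merge formula (Chan et al.).
final case class Moments2(count: Long, mean: Double, m2: Double) {

  /** Merge two partial aggregates, e.g. from two Spark partitions. */
  def +(that: Moments2): Moments2 =
    if (count == 0L) that
    else if (that.count == 0L) this
    else {
      val n = count + that.count
      val delta = that.mean - mean
      val newMean = mean + delta * that.count / n
      // M2 is the sum of squared deviations; merging adds a cross term.
      val newM2 = m2 + that.m2 + delta * delta * count * that.count / n
      Moments2(n, newMean, newM2)
    }

  def variance: Double = if (count > 0L) m2 / count else 0.0
  def stddev: Double = math.sqrt(variance)
}

object Moments2 {
  val zero: Moments2 = Moments2(0L, 0.0, 0.0)           // monoid identity
  def observe(x: Double): Moments2 = Moments2(1L, x, 0.0) // single observation
}
```

Folding one `observe(text.length)` per record with `+` yields the same mean/stddev regardless of how the data is partitioned, which is exactly the property needed for distributed aggregation.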