
Commit eb6bf3a

Add SimpleTable construct based on Named Tuples (#81)
## Abstract

In this PR, introduce a new `com.lihaoyi::scalasql-namedtuples` module, supporting only Scala 3.7+, with three main contributions:
- `scalasql.namedtuples.SimpleTable`, a class that serves as an alternative to `Table`. The pitch is that you can use a "basic" case class, with no higher-kinded type parameter, to represent your models.
- `scalasql.namedtuples.NamedTupleQueryable`, which provides an implicit `Queryable` so you can return a named tuple from a query.
- the `scalasql.simple` package object, which re-exports the `scalasql` package, plus `SimpleTable` and `NamedTupleQueryable`.

## Example

**Defining a table**

```scala
import scalasql.simple.{*, given}

case class City(
    id: Int,
    name: String,
    countryCode: String,
    district: String,
    population: Long
)
object City extends SimpleTable[City]
```

**Returning named tuples from queries**

```scala
val query = City.select.map(c => (name = c.name, population = c.population))
val res: Seq[(name: String, population: Long)] = db.run(query)
```

## Design

This PR manages to introduce `SimpleTable` entirely without changing the core library. It leverages the new Named Tuples and programmatic structural typing facilities introduced in Scala 3.7. No macros are needed.

It was also designed so that any query should be pretty much source compatible when you change from `Table` to `SimpleTable` (with the exception of dropping `[Sc]` type arguments).

Within a query, e.g. `City.select.map(c => ...)`, we still need `c` to be an object that has all the fields of `City`, but with each field wrapped in either `scalasql.Expr[T]` or `scalasql.Column[T]`. With `Table` this is done by giving the case class an explicit type parameter (e.g. `City[T[_]](name: T[String], ...)`), so you would just substitute the parameter. With `SimpleTable` the main idea is, of course, that you do not declare this `T[_]` type parameter - but the `scalasql.query` package expects it to be there.

The solution in this PR is to represent the table row within queries by `Record[City, Expr]` (rather than `City[Expr]`). `Record[C, T[_]]` is a new class: essentially a structurally typed tuple that extends `scala.Selectable` with a named tuple `Fields` type member, derived by mapping `T` over `NamedTuple.From[C]`. `Record` (and `SimpleTable`) still support using a nested case class field to share common columns (with a caveat: see the `SimpleTable.Nested` marker below).

When you return a `Record[C, T]` from a query, you still need to get back a `C`, so `SimpleTable` provides an implicit `Queryable.Row[Record[C, Expr], C]`, which is generated by compile-time derivation (via `inline` methods).

### Implementation

To keep the diff simpler, `SimpleTable` is defined entirely in terms of `Table`, i.e. here is the signature:

```scala
class SimpleTable[C](
    using name: sourcecode.Name,
    metadata0: Table.Metadata[[T[_]] =>> SimpleTable.MapOver[C, T]]
) extends Table[[T[_]] =>> SimpleTable.MapOver[C, T]](using name, metadata0)
```

The `metadata0` argument is expected to be generated automatically by an inline given in `SimpleTableMacros.scala` (which I suggest renaming to `SimpleTableDerivation.scala`).

`Table[V[_[_]]]`, being higher kinded, normally expects some `case class Foo[T[_]]`, and fills in `V[Expr]` or `V[Column]` in various places in queries, and `V[Sc]` for results. However, for `SimpleTable`, when `T[_]` is `scalasql.Sc` we want to get back `C` itself, and otherwise `Record[C, T]`, so `MapOver` needs to be a match type:

```scala
object SimpleTable {
  type MapOver[C, T[_]] = T[Internal.Tombstone.type] match {
    case Internal.Tombstone.type => C // T is `Sc`
    case _ => Record[C, T]
  }
}
```

(`Tombstone` is used here to introduce a unique type that will never be used for any other purpose, i.e. one the match type resolver can treat as disjoint from everything else - and so we can convince ourselves that if `T` returns `Tombstone` it is probably the identity and not some accident.) See #83 for another approach that eliminates the `V[_[_]]` parameter from `Table`, `Insert`, and various other places.
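To see concretely how the `Tombstone` probe drives the match type, here is a standalone sketch of the technique - the names mirror scalasql's, but these are illustrative stand-ins, not the module's actual definitions:

```scala
// Standalone sketch of the Tombstone technique (stand-in types, not scalasql's).
object Tombstone // used for nothing else, so it is provably disjoint from other types

type Sc[X] = X // identity, like scalasql.Sc
class Expr[X] // stand-in for scalasql.Expr
class Record[C, T[_]] // stand-in for SimpleTable.Record

// Probe T with Tombstone: only an identity-like T can hand Tombstone back.
type MapOver[C, T[_]] = T[Tombstone.type] match {
  case Tombstone.type => C // T is `Sc`: results are plain values
  case _ => Record[C, T] // T is `Expr`/`Column`: wrap the row
}

case class City(name: String)
val forResults = summon[MapOver[City, Sc] =:= City]
val forQueries = summon[MapOver[City, Expr] =:= Record[City, Expr]]
```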
**Design of `Record`**

`Record[C, T[_]]` is implemented as a structural type that wraps the fields of `C` in `T`. It has a few design constraints:
- when `C` has a field of type `X` that is a nested table, the corresponding field in `Record[C, T]` must also be `Record[X, T]`;
- when selecting a nested record, preserve which type constructor (e.g. `Expr` or `Column`) to wrap fields in from the outer level;
- keep the types shown in the IDE simple.

First decision: `Record` uses a `Fields` type member for structural selection, rather than traditional type refinements. Why:
- it can be constructed without macros;
- the internals can be based on `Array` rather than a hash map;
- `Fields`, derived via `NamedTuple.From[C]`, is treated as part of the class implementation, which means you never get a huge refinement type showing up whenever you hover in the IDE.

Second decision: how to decide which fields are "scalar" data and which are nested records. Constraints:
- previously, with `Table`, the only evidence that a field of type `X` is a nested table was implicit evidence of type `Table.ImplicitMetadata[X]`;
- match types can only dispatch on statically known information, and there is currently no match type (or `scala.compiletime` intrinsic) that can tell you whether an implicit of type `X` exists.

Choices:
- [ ] pre-compute the transitive closure of all possible nested fields as a third type argument to `Record`, which in typical cases would be empty;
- [ ] require that each nested field carry some marker, e.g. `foo: Ref[Foo]` - it is unclear how intrusive this would be at each use-site;
- [x] introduce a marker class (`SimpleTable.Nested`) that the nested case class must extend - this does, however, prevent using "third party" classes as a nested table.

The implicit derivation of the metadata also enforces that whenever implicit metadata is discovered for use as a field, the class must extend `SimpleTable.Nested`.

```scala
object SimpleTable {
  // needs to be a class so the match type reducer can "prove disjoint" to various other types.
  abstract class Nested

  final class Record[C, T[_]](private val data: IArray[AnyRef]) extends Selectable {

    /**
     * For each field `x: X` of class `C` there exists a field `x` in this record of type
     * `Record[X, T]` if `X` is a case class that represents a table, or `T[X]` otherwise.
     */
    type Fields = NamedTuple.Map[
      NamedTuple.From[C],
      [X] =>> X match {
        case SimpleTable.Nested => Record[X, T]
        case _ => T[X]
      }
    ]
  }
}
```
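The `Fields` mechanism itself can be sketched independently of scalasql. The following is a minimal, self-contained illustration (Scala 3.7; `Rec` and `Point` are made-up names) of how a `Selectable` with a named tuple `Fields` member and an array-backed store gives typed field access without a refinement type in sight:

```scala
// Minimal sketch of programmatic structural typing via a `Fields` named tuple.
final class Rec[C](names: IArray[String], data: IArray[AnyRef]) extends Selectable:
  // Here fields are left unwrapped; scalasql's Record additionally maps T over them.
  type Fields = NamedTuple.From[C]
  def selectDynamic(name: String): AnyRef = data(names.indexOf(name))

case class Point(x: Int, y: Int)

val r = new Rec[Point](IArray("x", "y"), IArray(Int.box(1), Int.box(2)))
val x: Int = r.x // typechecked against Fields, executed via selectDynamic("x")
```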
### Alternatives

**Why is there a single `Record[C, T]`, rather than `ExprRecord[C]` and `ColumnRecord[C]` classes?**

This was explored in #83, which requires a large change to the `scalasql.query` package, i.e. a new type hierarchy for `Table` (though it does make the boundary between read-only queries, column updates, and results more explicit in the types). It is also unclear whether it relies on "hacks" to work.

**Why use `Record[C, T]` and not named tuples in queries?**

1. It is almost impossible (and expensive, when possible at all) to preserve the mapping that a large named tuple type - with no reference to the original class - should map back to that class after running the query.
2. It would also be ambiguous with the case where you explicitly want to return a named tuple, rather than map back to the table class.
3. `Record` is a very cheap association directly back to the class it derives from; it is also a compact type if it ever needs to be written explicitly, or shown by an IDE.
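To make point 2 concrete, both result shapes below are legitimate, so the query type has to distinguish them (reusing the `City` example from the start of this description):

```scala
// Selecting whole rows: the implicit Queryable.Row[Record[City, Expr], City]
// maps the record back to the case class.
val cities: Seq[City] = db.run(City.select)

// Explicitly constructing a named tuple keeps the named tuple shape.
val pairs: Seq[(name: String, population: Long)] =
  db.run(City.select.map(c => (name = c.name, population = c.population)))
```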
**What is needed to get rid of `SimpleTable.Nested`?**

Let's remind ourselves of the current definition of `SimpleTable`:

```scala
class SimpleTable[C](
    using name: sourcecode.Name,
    metadata0: Table.Metadata[[T[_]] =>> SimpleTable.MapOver[C, T]]
) extends Table[[T[_]] =>> SimpleTable.MapOver[C, T]](using name, metadata0) {
  given simpleTableGivenMetadata: SimpleTable.GivenMetadata[C] =
    SimpleTable.GivenMetadata(metadata0)
}
```

First of all, we determined that the transitive closure of the available implicit `SimpleTable.GivenMetadata[Foo]` instances needs to be added as an argument to `Record`. In #82 we explored this by precomputing all the field types ahead of time in a macro, so the types would look a bit like `Record[City, Expr, (id: Expr[Long], name: Expr[String], nested: (fooId: Expr[Long], ...))]`, which was very verbose.

An alternative could be to pass, as a type parameter, the classes for which a metadata is defined - something like `Record[City, Expr, Foo | Bar]` or `Record[Foo, Expr, Empty.type]` - and modify the `Record` class as such:

```diff
-final class Record[C, T[_]](private val data: IArray[AnyRef]) extends Selectable:
+final class Record[C, T[_], <TABLES>](private val data: IArray[AnyRef]) extends Selectable:
   /**
    * For each field `x: X` of class `C` there exists a field `x` in this record of type
    * `Record[X, T]` if `X` is a case class that represents a table, or `T[X]` otherwise.
    */
   type Fields = NamedTuple.Map[
     NamedTuple.From[C],
-    [X] =>> X match {
-      case Nested => Record[X, T]
+    [X] =>> IsSub[X, <TABLES>] match {
+      case true => Record[X, T]
       case _ => T[X]
     }
   ]
```

This could be a sweet spot between verbosity and extensibility to "uncontrolled" third-party classes - but it is uncertain who, in reality, would be blocked by needing to extend `SimpleTable.Nested`. The potential impact on compilation times also still needs to be determined, as does the best place to compute this type without causing an explosion of implicit searches.

**You can see a prototype here: [bishabosha/scalasql#table-named-tuples-infer-nested-tables](https://github.com/bishabosha/scalasql/tree/feature/table-named-tuples-infer-nested-tables)**

## Build changes

Introduce a top-level `scalasql-namedtuples` module:
- publishes as `com.lihaoyi:scalasql-namedtuples_3`;
- scalaVersion `3.7.0`;
- sources are located in `scalasql/namedtuples`;
- depends on module `scalasql("3.6.2")`, so that it can re-export all of scalasql from the `scalasql.simple` package object.

Also declare a `scalasql-namedtuples.test` module:
- sources in `scalasql/namedtuples/test`;
- depends on module `scalasql("3.6.2").test`, so the custom test framework can be used to capture test results.

## Testing changes

The main approach to testing was to copy test sources that already exist and convert them to use `SimpleTable`, with otherwise no other changes.

Assumptions made when copying:
- The majority of existing `scalasql` tests exercise the query translation to SQL, rather than specifically the implementation of the `Table.Metadata` generated by macros.
- Since the only differences between using `SimpleTable` and `Table` are the signatures of the available implicits and the implementation of `Table.Metadata`, the test coverage for `SimpleTable` should focus on type checking, and on the fundamentals of the table metadata being implemented correctly in a "round trip".
- So I copied the tests from `scalasql/test/src/ExampleTests.scala`, `scalasql/test/src/datatypes/DataTypesTests.scala` and `scalasql/test/src/datatypes/OptionalTests.scala`, renaming the traits and switching from `Table` to `SimpleTable`, otherwise unchanged.
- I also had to copy `scalasql/test/src/ConcreteTestSuites.scala` to `scalasql/namedtuples/test/src/SimpleTableConcreteTestSuites.scala`, commenting out most objects except `OptionalTests` and `DataTypesTests`, which now extend the duplicated and renamed suites. I also renamed the package to `scalasql.namedtuples`.
- Finally, I also copied `scalasql/test/src/WorldSqlTests.scala` (to `scalasql/namedtuples/test/src/example/WorldSqlTestsNamedTuple.scala`) to ensure that every example in `tutorial.md` compiles after switching to `SimpleTable`, and also to provide snippets to include in `tutorial.md`.
- I also renamed a few tests in the duplicates of `OptionalTests.scala` and `DataTypesTests.scala` so that they would generate unique names that can be included in `reference.md`.

New tests:
- demonstrations of returning named tuples in the various `SimpleTableH2Example` tests;
- `scalasql/namedtuples/test/src/datatypes/LargeObjectTest.scala`, to stress test the compiler with large classes;
- `scalasql/namedtuples/test/src/example/foo.scala`, for quick testing of compilation, type checking, etc.;
- replacement of the case class `copy` method with `Record#updates` in `SimpleTableOptionalTests.scala`.

## Documentation changes

`tutorial.md` and `reference.md` are generated from Scala source files and test results by `docs/generateDocs.mill`. Rather than duplicate both `tutorial.md` and `reference.md` for `SimpleTable`, I decided it would be better to avoid duplication (and potential drift) by reusing the original documents, including specific notes wherever `SimpleTable` or `NamedTupleQueryable` adds new functionality or requires different code.

### tutorial.md

To update `tutorial.md` I wrote the new text, as usual, in `WorldSqlTests.scala`. These texts exclusively discuss the differences between the two approaches, such as declaring case classes, returning named tuples, or using the `updates` method on a record.

To support the new texts, I needed to include code snippets, and as with the rest of `WorldSqlTests.scala` I would prefer the snippets to be verified by a test suite. So the plan was to copy `WorldSqlTests.scala` to a new file, update the examples to use `SimpleTable`, and include snippets from there.

To support including snippets from another file, I updated the `generateTutorial` task in `docs/generateDocs.mill`: if the scanner sees a line `// +INCLUDE SNIPPET [FOO] somefile` in `WorldSqlTests.scala`, it switches to reading the lines of `somefile`, looking for the first line containing `// +SNIPPET [FOO]`; it then splices in all following lines of `somefile` until it reaches a line containing `// -SNIPPET [FOO]`, at which point it switches back to reading the lines of `WorldSqlTests.scala`. The main idea is that snippets within `somefile` should be declared in the same order that they are included from `WorldSqlTests.scala`, meaning the scanner traverses both files from top to bottom exactly once (resuming from the previous position whenever it switches back).
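For example (the `[CITY]` key and the snippet contents here are illustrative, not the PR's actual markers):

```scala
// In WorldSqlTests.scala: splice the snippet from the other file at this point.
// +INCLUDE SNIPPET [CITY] scalasql/namedtuples/test/src/example/WorldSqlTestsNamedTuple.scala

// In WorldSqlTestsNamedTuple.scala: only the lines between the markers are spliced in.
// +SNIPPET [CITY]
case class City(id: Int, name: String, countryCode: String, district: String, population: Long)
object City extends SimpleTable[City]
// -SNIPPET [CITY]
```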
So, to declare the snippets as mentioned above, I copied `WorldSqlTests.scala` to `scalasql/namedtuples/test/src/example/WorldSqlTestsNamedTuple.scala`, replaced `Table` with `SimpleTable`, declared the snippets I wanted in there, and included them from `WorldSqlTests.scala`. Any other changes (e.g. newlines, indentation, etc.) are likely due to updating scalafmt.

### reference.md

This file is generated by the `generateReference` task in `docs/generateDocs.mill`. It works by formatting the data from `out/recordedTests.json` (captured by running the tests with a custom framework) and grouping the tests by the suite they occur in. As with `tutorial.md`, I thought it best to only add extra snippets that highlight the differences between the two kinds of table.

So, first, to capture the output of the simple table tests, in the build I set the `SCALASQL_RECORDED_TESTS_NAME` and `SCALASQL_RECORDED_SUITE_DESCRIPTIONS_NAME` environment variables in the `scalasql-namedtuples.test` module: in this case, to `recordedTestsNT.json` and `recordedSuiteDescriptionsNT.json`.

Next, I updated the `generateReference` task so that it also includes the recorded outputs from `recordedTestsNT.json`. This task handles the grouping of tests and the removal of duplicates (e.g. the `mysql` and `h2` variants). I made it so that for each suite, e.g. `DataTypes`, it finds the equivalent suite in the simple table results, and then includes only the test names it hadn't yet seen at the end of that suite. Therefore, to include any test result from `SimpleTableDataTypesTests.scala` or `SimpleTableOptionalTests.scala`, it is only necessary to rename an individual test, and it will be appended to the bottom of the relevant group in `reference.md`. For this PR I did this by adding a ` - with SimpleTable` suffix to the relevant tests (i.e. the demonstration of nested classes, and the usage of the `Record#updates` method).
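As a minimal sketch of that merge rule (simplified data model and assumed names, not the actual `generateReference` code): records from the original suite are emitted in order, and any simple-table record whose pretty name was not already seen is appended at the end of the group:

```scala
// Simplified model of the suite-merging rule (illustrative only).
case class Rec(suiteName: String, testPath: Seq[String])

def pretty(r: Rec): String = (r.suiteName +: r.testPath).mkString(".")

def merge(records: Seq[Rec], ntRecords: Seq[Rec]): Seq[Rec] =
  val seen = records.map(pretty).toSet
  // originals keep their order; unseen (renamed) tests are appended at the end
  records ++ ntRecords.filterNot(r => seen(pretty(r)))

@main def demo(): Unit =
  val base = Seq(Rec("DataTypes", Seq("constant")))
  val nt = Seq(
    Rec("DataTypes", Seq("constant")), // duplicate name: dropped
    Rec("DataTypes", Seq("constant - with SimpleTable")) // unseen: appended
  )
  merge(base, nt).map(pretty).foreach(println)
```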
1 parent: a073d93

25 files changed: +5075 −79 lines

.scalafmt.conf

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,4 +1,4 @@
-version = "3.8.1"
+version = "3.9.5"
 
 align.preset = none
 align.openParenCallSite = false
```

build.mill

Lines changed: 62 additions & 24 deletions
```diff
@@ -8,10 +8,11 @@ import de.tobiasroeser.mill.vcs.version.VcsVersion
 import com.goyeau.mill.scalafix.ScalafixModule
 import mill._, scalalib._, publish._
 
-val scalaVersions = Seq("2.13.12", "3.6.2")
+val scala3 = "3.6.2"
+val scalaVersions = Seq("2.13.12", scala3)
+val scala3NamedTuples = "3.7.0"
 
-trait Common extends CrossScalaModule with PublishModule with ScalafixModule{
-  def scalaVersion = crossScalaVersion
+trait CommonBase extends ScalaModule with PublishModule with ScalafixModule { common =>
 
   def publishVersion = VcsVersion.vcsState().format()
 
@@ -33,18 +34,13 @@ trait Common extends CrossScalaModule with PublishModule with ScalafixModule{
     Seq("-Wunused:privates,locals,explicits,implicits,params") ++
       Option.when(scalaVersion().startsWith("2."))("-Xsource:3")
   }
-}
-
-
-object scalasql extends Cross[ScalaSql](scalaVersions)
-trait ScalaSql extends Common{ common =>
-  def moduleDeps = Seq(query, operations)
-  def ivyDeps = Agg.empty[Dep] ++ Option.when(scalaVersion().startsWith("2."))(
-    ivy"org.scala-lang:scala-reflect:${scalaVersion()}"
-  )
 
+  def semanticDbVersion: T[String] =
+    // last version that works with Scala 2.13.12
+    "4.12.3"
 
-  object test extends ScalaTests with ScalafixModule{
+  trait CommonTest extends ScalaTests with ScalafixModule {
+    def semanticDbVersion: T[String] = common.semanticDbVersion
     def scalacOptions = common.scalacOptions
     def ivyDeps = Agg(
       ivy"com.github.vertical-blank:sql-formatter:2.0.4",
@@ -61,10 +57,51 @@ trait ScalaSql extends Common{ common =>
       ivy"com.zaxxer:HikariCP:5.1.0"
     )
 
+    def recordedTestsFile: String
+    def recordedSuiteDescriptionsFile: String
+
     def testFramework = "scalasql.UtestFramework"
 
     def forkArgs = Seq("-Duser.timezone=Asia/Singapore")
-    def forkEnv = Map("MILL_WORKSPACE_ROOT" -> T.workspace.toString())
+
+    def forkEnv = Map(
+      "MILL_WORKSPACE_ROOT" -> T.workspace.toString(),
+      "SCALASQL_RECORDED_TESTS_NAME" -> recordedTestsFile,
+      "SCALASQL_RECORDED_SUITE_DESCRIPTIONS_NAME" -> recordedSuiteDescriptionsFile
+    )
+  }
+}
+trait Common extends CommonBase with CrossScalaModule
+
+object `scalasql-namedtuples` extends CommonBase {
+  def scalaVersion: T[String] = scala3NamedTuples
+  def millSourcePath: os.Path = scalasql(scala3).millSourcePath / "namedtuples"
+  def moduleDeps: Seq[PublishModule] = Seq(scalasql(scala3))
+
+  // override def scalacOptions: Target[Seq[String]] = T {
+  //   super.scalacOptions() :+ "-Xprint:inlining"
+  // }
+
+  object test extends CommonTest {
+    def resources = scalasql(scala3).test.resources
+    def moduleDeps = super.moduleDeps ++ Seq(scalasql(scala3), scalasql(scala3).test)
+    def recordedTestsFile: String = "recordedTestsNT.json"
+    def recordedSuiteDescriptionsFile: String = "recordedSuiteDescriptionsNT.json"
+  }
+}
+
+object scalasql extends Cross[ScalaSql](scalaVersions)
+trait ScalaSql extends Common { common =>
+  def moduleDeps = Seq(query, operations)
+  def ivyDeps = Agg.empty[Dep] ++ Option.when(scalaVersion().startsWith("2."))(
+    ivy"org.scala-lang:scala-reflect:${scalaVersion()}"
+  )
+
+  override def consoleScalacOptions: T[Seq[String]] = Seq("-Xprint:typer")
+
+  object test extends CommonTest {
+    def recordedTestsFile: String = "recordedTests.json"
+    def recordedSuiteDescriptionsFile: String = "recordedSuiteDescriptions.json"
   }
 
   private def indent(code: Iterable[String]): String =
@@ -74,15 +111,14 @@ trait ScalaSql extends Common{ common =>
   def ivyDeps = Agg(
     ivy"com.lihaoyi::geny:1.0.0",
     ivy"com.lihaoyi::sourcecode:0.3.1",
-    ivy"com.lihaoyi::pprint:0.8.1",
+    ivy"com.lihaoyi::pprint:0.8.1"
   ) ++ Option.when(scalaVersion().startsWith("2."))(
     ivy"org.scala-lang:scala-reflect:${scalaVersion()}"
   )
 
   def generatedSources = T {
     def commaSep0(i: Int, f: Int => String) = Range.inclusive(1, i).map(f).mkString(", ")
 
-
     val queryableRowDefs = for (i <- Range.inclusive(2, 22)) yield {
       def commaSep(f: Int => String) = commaSep0(i, f)
       s"""implicit def Tuple${i}Queryable[${commaSep(j => s"Q$j")}, ${commaSep(j => s"R$j")}](
@@ -98,7 +134,6 @@ trait ScalaSql extends Common{ common =>
       |}""".stripMargin
     }
 
-
     os.write(
       T.dest / "Generated.scala",
       s"""package scalasql.core.generated
@@ -113,15 +148,13 @@ trait ScalaSql extends Common{ common =>
 
   }
 
-
-object operations extends Common with CrossValue{
+object operations extends Common with CrossValue {
   def moduleDeps = Seq(core)
 }
 
-object query extends Common with CrossValue{
+object query extends Common with CrossValue {
   def moduleDeps = Seq(core)
 
-
   def generatedSources = T {
     def commaSep0(i: Int, f: Int => String) = Range.inclusive(1, i).map(f).mkString(", ")
 
@@ -139,7 +172,9 @@ trait ScalaSql extends Common{ common =>
       | )
       |
       |""".stripMargin
-      s"""def batched[${commaSep(j => s"T$j")}](${commaSep(j => s"f$j: V[Column] => Column[T$j]")})(
+      s"""def batched[${commaSep(j => s"T$j")}](${commaSep(j =>
+          s"f$j: V[Column] => Column[T$j]"
+        )})(
       | items: (${commaSep(j => s"Expr[T$j]")})*
       |)(implicit qr: Queryable[V[Column], R]): scalasql.query.InsertColumns[V, R] $impl""".stripMargin
     }
@@ -165,12 +200,15 @@ trait ScalaSql extends Common{ common =>
 
     val commaSepQ = commaSep(j => s"Q$j")
     val commaSepR = commaSep(j => s"R$j")
-    val joinAppendType = s"scalasql.query.JoinAppend[($commaSepQ), QA, ($commaSepQ, QA), ($commaSepR, RA)]"
+    val joinAppendType =
+      s"scalasql.query.JoinAppend[($commaSepQ), QA, ($commaSepQ, QA), ($commaSepR, RA)]"
     s"""
     |implicit def append$i[$commaSepQ, QA, $commaSepR, RA](
    |  implicit qr0: Queryable.Row[($commaSepQ, QA), ($commaSepR, RA)],
    |  @annotation.nowarn("msg=never used") qr20: Queryable.Row[QA, RA]): $joinAppendType = new $joinAppendType {
-    |  override def appendTuple(t: ($commaSepQ), v: QA): ($commaSepQ, QA) = (${commaSep(j => s"t._$j")}, v)
+    |  override def appendTuple(t: ($commaSepQ), v: QA): ($commaSepQ, QA) = (${commaSep(j =>
+        s"t._$j"
+      )}, v)
     |
     |  def qr: Queryable.Row[($commaSepQ, QA), ($commaSepR, RA)] = qr0
     |}""".stripMargin
```

docs/generateDocs.mill

Lines changed: 61 additions & 16 deletions
````diff
@@ -5,6 +5,7 @@ def generateTutorial(sourcePath: os.Path, destPath: os.Path) = {
   var isDocs = Option.empty[Int]
   var isCode = false
   val outputLines = collection.mutable.Buffer.empty[String]
+  val snippets = collection.mutable.HashMap.empty[String, scala.collection.BufferedIterator[String]]
   outputLines.append(generatedCodeHeader)
   for (line <- os.read.lines(sourcePath)) {
     val isDocsIndex = line.indexOf("// +DOCS")
@@ -25,6 +26,24 @@ def generateTutorial(sourcePath: os.Path, destPath: os.Path) = {
       (suffix, isCode) match{
         case ("", _) => outputLines.append("")
 
+        case (s"// +INCLUDE SNIPPET [$key] $rest", _) =>
+          // reuse the iterator each time,
+          // basically assume snippets are requested in order.
+          val sublines: scala.collection.BufferedIterator[String] = snippets.getOrElseUpdate(rest, os.read.lines(mill.api.WorkspaceRoot.workspaceRoot / os.SubPath(rest)).iterator.buffered)
+          val start = s"// +SNIPPET [$key]"
+          val end = s"// -SNIPPET [$key]"
+          while (sublines.hasNext && !sublines.head.contains(start)) {
+            sublines.next() // drop lines until we find the start
+          }
+          val indent = sublines.headOption.map(_.indexOf(start)).getOrElse(-1)
+          if (indent != -1) {
+            sublines.next() // skip the start line
+            while (sublines.hasNext && !sublines.head.contains(end)) {
+              outputLines.append(sublines.next().drop(indent))
+            }
+          } else {
+            outputLines.append("")
+          }
         case (s"// +INCLUDE $rest", _) =>
           os.read.lines(mill.api.WorkspaceRoot.workspaceRoot / os.SubPath(rest)).foreach(outputLines.append)
 
@@ -50,11 +69,14 @@ def generateTutorial(sourcePath: os.Path, destPath: os.Path) = {
 }
 def generateReference(dest: os.Path, scalafmtCallback: (Seq[os.Path], os.Path) => Unit) = {
   def dropExprPrefix(s: String) = s.split('.').drop(2).mkString(".")
+  def dropNTExprPrefix(s: String) = s.split('.').drop(3).mkString(".")
   val records = upickle.default.read[Seq[Record]](os.read.stream(mill.api.WorkspaceRoot.workspaceRoot / "out" / "recordedTests.json"))
+  val ntRecords = upickle.default.read[Seq[Record]](os.read.stream(mill.api.WorkspaceRoot.workspaceRoot / "out" / "recordedTestsNT.json"))
   val suiteDescriptions = upickle.default.read[Map[String, String]](os.read.stream(mill.api.WorkspaceRoot.workspaceRoot / "out" / "recordedSuiteDescriptions.json"))
     .map{case (k, v) => (dropExprPrefix(k), v)}
 
-  val rawScalaStrs = records.flatMap(r => Seq(r.queryCodeString) ++ r.resultCodeString)
+  val rawScalaStrs = (records ++ ntRecords)
+    .flatMap(r => Seq(r.queryCodeString) ++ r.resultCodeString)
   val formattedScalaStrs = {
     val tmps = rawScalaStrs.map(os.temp(_, suffix = ".scala"))
     scalafmtCallback(tmps, mill.api.WorkspaceRoot.workspaceRoot / ".scalafmt.conf")
@@ -124,6 +146,10 @@ def generateReference(dest: os.Path, scalafmtCallback: (Seq[os.Path], os.Path) =
       |databases, due to differences in how each database parses SQL. These differences
       |are typically minor, and as long as you use the right `Dialect` for your database
       |ScalaSql should do the right thing for you.
+      |
+      |>**A note for users of `SimpleTable`**: The examples in this document assume usage of
+      |>`Table`, with a higher kinded type parameter on a case class. If you are using
+      |>`SimpleTable`, then the same code snippets should work by dropping `[Sc]`.
       |""".stripMargin
   )
   val recordsWithoutDuplicateSuites = records
@@ -132,15 +158,26 @@ def generateReference(dest: os.Path, scalafmtCallback: (Seq[os.Path], os.Path) =
     .sortBy(_._2.head.suiteLine)
     .distinctBy { case (k, v) => dropExprPrefix(k)}
     .map{case (k, vs) => (dropExprPrefix(k), vs.map(r => r.copy(suiteName = dropExprPrefix(r.suiteName))))}
+  val ntRecordsWithoutDuplicateSuites = ntRecords
+    .groupBy(_.suiteName)
+    .toSeq
+    .sortBy(_._2.head.suiteLine)
+    .distinctBy { case (k, v) => dropNTExprPrefix(k)}
+    .map{case (k, vs) => (dropNTExprPrefix(k), vs.map(r => r.copy(suiteName = dropNTExprPrefix(r.suiteName))))}
+    .toMap
 
   for((suiteName, suiteGroup) <- recordsWithoutDuplicateSuites) {
     val seen = mutable.Set.empty[String]
    outputLines.append(s"## $suiteName")
    outputLines.append(suiteDescriptions(suiteName))
     var lastSeen = ""
-    for(r <- suiteGroup){
-
-      val prettyName = (r.suiteName +: r.testPath).mkString(".")
+    var remainingNTRecords = ntRecordsWithoutDuplicateSuites
+      .get(suiteName)
+      .getOrElse(Seq.empty).groupBy {r =>
+        val prettyName = (r.suiteName +: r.testPath).mkString(".")
+        prettyName
+      }
+    def addRecord(r: Record, prettyName: String) = {
      val titleOpt =
        if (prettyName == lastSeen) Some("----")
        else if (!seen(prettyName)) Some(s"### $prettyName")
@@ -151,21 +188,29 @@ def generateReference(dest: os.Path, scalafmtCallback: (Seq[os.Path], os.Path) =
      lastSeen = prettyName
      outputLines.append(
        s"""$title
-          |
-          |${dedent(r.docs, "")}
-          |
-          |```scala
-          |${scalafmt(r.queryCodeString)}
-          |```
-          |
-          |${sqlFormat(r.sqlString)}
-          |
-          |${renderResult(r.resultCodeString)}
-          |
-          |""".stripMargin
+           |
+           |${dedent(r.docs, "")}
+           |
+           |```scala
+           |${scalafmt(r.queryCodeString)}
+           |```
+           |
+           |${sqlFormat(r.sqlString)}
+           |
+           |${renderResult(r.resultCodeString)}
+           |
+           |""".stripMargin
      )
    }
   }
+    for(r <- suiteGroup){
+      val prettyName = (r.suiteName +: r.testPath).mkString(".")
+      addRecord(r, prettyName)
+      remainingNTRecords -= prettyName
+    }
+    for((prettyName, rs) <- remainingNTRecords; r <- rs) {
+      addRecord(r, prettyName)
+    }
   }
   os.write.over(dest, outputLines.mkString("\n"))
 }
````
