Next tensor typing meeting: Monday 13th
Hi all,

Our next tensor typing meeting will be *Monday the 13th at 6pm London time / 10am San Francisco time* at http://meet.google.com/fft-dzjq-ksu. Agenda is yet to be confirmed.

As always, notes and recordings from past meetings are available here <https://docs.google.com/document/d/1oaG0V2ZE5BRDjd9N-Tr1N0IKGwZQcraIlZ0N8ayq...>.

Thanks!
Matthew
Alright, so the main item on this month's agenda is that Elias Ellison from the PyTorch dev team will be discussing an approach that PyTorch is planning to use for annotating code with shape information, and how those annotations will be used.

Give me a shout if anyone else has anything to say!

On Tue, 7 Sept 2021 at 17:29, Matthew Rahtz <mrahtz@google.com> wrote:
Thanks everyone for coming! The recording is available here <https://drive.google.com/file/d/1b8GMP6SY0cpFDPgtgjcjvEbO6SYerxzU/view?usp=s...> - we'll hopefully have a link directly to the slides soon too. My attempt at a summary of Elias' talk:

- PyTorch needs a way of specifying how operations and functions change shapes, to help with JIT compilation
- Their tentative approach: additional functions, written in C++/Python, which specify how a function's output shape is computed from its input shape (e.g. in addition to a conv2d function, you'd also define a conv2d_shape function)
- There are some tentative ideas for how this might integrate with type checking, but nothing is implemented yet

One thing this approach made me wonder about is: how hard *would* it be to do something like the following with static type checkers?

    def conv2d(input_tensor, kernel_size, stride) -> ProceduralType[conv2d_shape]: ...

    def conv2d_shape(input_tensor, kernel_size, stride):
        # Computes the return type of conv2d based on input_tensor, kernel_size and stride

I'm guessing the main difficulty would be guaranteeing that conv2d_shape really could be understood statically (e.g. that it doesn't depend on global variables)? If I understand correctly, PyTorch avoids this problem because conv2d_shape is technically written in TorchScript rather than Python - but could we work around this in a different way, e.g. by just having the static checker bail out if it finds it can't understand the shape function?

This is of course a separate question from whether we *should* actually consider doing something like this. In my mind the main argument against it is verbosity: it seems better to keep the whole type signature in one place, rather than having half of it where the function is defined and the other half in some other function. The downside is that, as Elias pointed out, we probably won't be able to specify the full signature of some operators, e.g. einsum, with only type arithmetic operators - though my current feeling is that that's a reasonable tradeoff for the reduction in complexity.

On Wed, 8 Sept 2021 at 10:05, Matthew Rahtz <mrahtz@google.com> wrote:
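To make the shape arithmetic above concrete, here is a minimal runnable sketch of what a shape function like conv2d_shape could compute. The signature and names are hypothetical (this is not PyTorch's actual API); it assumes a 2D convolution with no padding or dilation, where the output spatial size follows the usual formula (size - kernel) // stride + 1:

```python
def conv2d_shape(input_shape, out_channels, kernel_size, stride):
    # Hypothetical shape function for a 2D convolution with no
    # padding and no dilation. input_shape is (batch, in_channels, H, W).
    batch, _in_channels, h, w = input_shape
    out_h = (h - kernel_size) // stride + 1
    out_w = (w - kernel_size) // stride + 1
    return (batch, out_channels, out_h, out_w)

# e.g. a batch of 32 RGB 28x28 images through a 3x3, stride-1 conv
# with 8 output channels:
print(conv2d_shape((32, 3, 28, 28), out_channels=8, kernel_size=3, stride=1))
# → (32, 8, 26, 26)
```

The point is that the function body is pure integer arithmetic on its arguments, which is exactly the property a static checker would need to verify (or bail out on) before evaluating it at type-check time.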
Oh, and our next meeting will tentatively be on *Monday the 11th of October* at the same time - *6pm London time, 10am San Francisco time*.

On Wed, 15 Sept 2021 at 10:37, Matthew Rahtz <mrahtz@google.com> wrote:
Update: slides here <https://docs.google.com/presentation/d/1dRCknOI0Ic6oi_vdmj_QVOMBkBU2mutrQHdc...>

On Wed, 15 Sept 2021 at 10:38, Matthew Rahtz <mrahtz@google.com> wrote:
participants (1)
-
Matthew Rahtz