[dev.boringcrypto] all: merge master into dev.boringcrypto

Change-Id: I4e09d4f2cc77c4c2dc12f1ff40d8c36053ab7ab6
David Chase 2022-03-07 18:27:14 -05:00
commit f492793839
760 changed files with 11723 additions and 10926 deletions


@@ -120,6 +120,7 @@ Alex Kohler <alexjohnkohler@gmail.com>
 Alex Myasoedov <msoedov@gmail.com>
 Alex Opie <amtopie@gmail.com>
 Alex Plugaru <alex@plugaru.org> <alexandru.plugaru@gmail.com>
+Alex Schade <39062967+aschade92@users.noreply.github.com>
 Alex Schroeder <alex@gnu.org>
 Alex Sergeyev <abc@alexsergeyev.com>
 Alex Tokarev <aleksator@gmail.com>
@@ -135,6 +136,7 @@ Alexander Klauer <Alexander.Klauer@googlemail.com>
 Alexander Kucherenko <alxkchr@gmail.com>
 Alexander Larsson <alexander.larsson@gmail.com>
 Alexander Lourier <aml@rulezz.ru>
+Alexander Melentyev <alexander@melentyev.org>
 Alexander Menzhinsky <amenzhinsky@gmail.com>
 Alexander Morozov <lk4d4math@gmail.com>
 Alexander Neumann <alexander@bumpern.de>
@@ -145,6 +147,7 @@ Alexander Polcyn <apolcyn@google.com>
 Alexander Rakoczy <alex@golang.org>
 Alexander Reece <awreece@gmail.com>
 Alexander Surma <surma@surmair.de>
+Alexander Yastrebov <yastrebov.alex@gmail.com>
 Alexander Zhavnerchik <alex.vizor@gmail.com>
 Alexander Zillion <alex@alexzillion.com>
 Alexander Zolotov <goldifit@gmail.com>
@@ -179,6 +182,7 @@ Alok Menghrajani <alok.menghrajani@gmail.com>
 Alwin Doss <alwindoss84@gmail.com>
 Aman Gupta <aman@tmm1.net>
 Amarjeet Anand <amarjeetanandsingh@gmail.com>
+Amelia Downs <adowns@vmware.com>
 Amir Mohammad Saied <amir@gluegadget.com>
 Amit Kumar <mittalmailbox@gmail.com>
 Amr Mohammed <merodiro@gmail.com>
@@ -191,6 +195,7 @@ Anatol Pomozov <anatol.pomozov@gmail.com>
 Anders Pearson <anders@columbia.edu>
 Anderson Queiroz <contato@andersonq.eti.br>
 André Carvalho <asantostc@gmail.com>
+Andre Marianiello <andremarianiello@users.noreply.github.com>
 André Martins <aanm90@gmail.com>
 Andre Nathan <andrenth@gmail.com>
 Andrea Nodari <andrea.nodari91@gmail.com>
@@ -221,6 +226,7 @@ Andrew Gerrand <adg@golang.org>
 Andrew Harding <andrew@spacemonkey.com>
 Andrew Jackura <ajackura@google.com>
 Andrew Kemm <andrewkemm@gmail.com>
+Andrew LeFevre <capnspacehook@gmail.com>
 Andrew Louis <alouis@digitalocean.com>
 Andrew Lutomirski <andy@luto.us>
 Andrew Medvedev <andrew.y.medvedev@gmail.com>
@@ -234,6 +240,7 @@ Andrew Stormont <astormont@racktopsystems.com>
 Andrew Stribblehill <ads@wompom.org>
 Andrew Szeto <andrew@jabagawee.com>
 Andrew Todd <andrew.todd@wework.com>
+Andrew Wansink <wansink@uber.com>
 Andrew Werner <andrew@upthere.com> <awerner32@gmail.com>
 Andrew Wilkins <axwalk@gmail.com>
 Andrew Williams <williams.andrew@gmail.com>
@@ -283,6 +290,7 @@ Antonio Bibiano <antbbn@gmail.com>
 Antonio Garcia <garcia.olais@gmail.com>
 Antonio Huete Jimenez <tuxillo@quantumachine.net>
 Antonio Murdaca <runcom@redhat.com>
+Antonio Ojea <antonio.ojea.garcia@gmail.com>
 Antonio Troina <thoeni@gmail.com>
 Anze Kolar <me@akolar.com>
 Aofei Sheng <aofei@aofeisheng.com>
@@ -290,6 +298,7 @@ Apisak Darakananda <pongad@gmail.com>
 Aram Hăvărneanu <aram@mgk.ro>
 Araragi Hokuto <kanseihonbucho@protonmail.com>
 Arash Bina <arash@arash.io>
+Archana Ravindar <aravind5@in.ibm.com>
 Arda Güçlü <ardaguclu@gmail.com>
 Areski Belaid <areski@gmail.com>
 Ariel Mashraki <ariel@mashraki.co.il>
@@ -299,6 +308,7 @@ Arnaud Ysmal <arnaud.ysmal@gmail.com>
 Arne Hormann <arnehormann@gmail.com>
 Arnout Engelen <arnout@bzzt.net>
 Aron Nopanen <aron.nopanen@gmail.com>
+Arran Walker <arran.walker@fiveturns.org>
 Artem Alekseev <artem.alekseev@intel.com>
 Artem Khvastunov <artem.khvastunov@jetbrains.com>
 Artem Kolin <artemkaxboy@gmail.com>
@@ -337,6 +347,7 @@ Balaram Makam <bmakam.qdt@qualcommdatacenter.com>
 Balazs Lecz <leczb@google.com>
 Baokun Lee <nototon@gmail.com> <bk@golangcn.org>
 Barnaby Keene <accounts@southcla.ws>
+Bartłomiej Klimczak <bartlomiej.klimczak88@gmail.com>
 Bartosz Grzybowski <melkorm@gmail.com>
 Bartosz Oler <brtsz@google.com>
 Bassam Ojeil <bojeil@google.com>
@@ -368,6 +379,7 @@ Benny Siegert <bsiegert@gmail.com>
 Benoit Sigoure <tsunanet@gmail.com>
 Berengar Lehr <Berengar.Lehr@gmx.de>
 Berkant Ipek <41230766+0xbkt@users.noreply.github.com>
+Beth Brown <ecbrown@google.com>
 Bharath Kumar Uppala <uppala.bharath@gmail.com>
 Bharath Thiruveedula <tbharath91@gmail.com>
 Bhavin Gandhi <bhavin7392@gmail.com>
@@ -430,6 +442,7 @@ Brian Ketelsen <bketelsen@gmail.com>
 Brian Slesinsky <skybrian@google.com>
 Brian Smith <ohohvi@gmail.com>
 Brian Starke <brian.starke@gmail.com>
+Bruce Huang <helbingxxx@gmail.com>
 Bryan Alexander <Kozical@msn.com>
 Bryan Boreham <bjboreham@gmail.com>
 Bryan C. Mills <bcmills@google.com>
@@ -482,17 +495,21 @@ Charles Kenney <charlesc.kenney@gmail.com>
 Charles L. Dorian <cldorian@gmail.com>
 Charles Lee <zombie.fml@gmail.com>
 Charles Weill <weill@google.com>
+Charlie Getzen <charlie@bolt.com>
 Charlie Moog <moogcharlie@gmail.com>
 Charlotte Brandhorst-Satzkorn <catzkorn@gmail.com>
 Chauncy Cullitan <chauncyc@google.com>
 Chen Zhidong <njutczd@gmail.com>
 Chen Zhihan <energiehund@gmail.com>
+Cheng Wang <wangchengiscool@gmail.com>
 Cherry Mui <cherryyz@google.com>
 Chew Choon Keat <choonkeat@gmail.com>
+Chia-Chi Hsu <wuchi5457@gmail.com>
 Chiawen Chen <golopot@gmail.com>
 Chirag Sukhala <cchirag77@gmail.com>
 Cholerae Hu <choleraehyq@gmail.com>
 Chotepud Teo <AlexRouSg@users.noreply.github.com>
+Chressie Himpel <chressie@google.com>
 Chris Ball <chris@printf.net>
 Chris Biscardi <chris@christopherbiscardi.com>
 Chris Broadfoot <cbro@golang.org>
@@ -570,6 +587,7 @@ Cuong Manh Le <cuong@orijtech.com>
 Curtis La Graff <curtis@lagraff.me>
 Cyrill Schumacher <cyrill@schumacher.fm>
 Dai Jie <gzdaijie@gmail.com>
+Dai Wentao <dwt136@gmail.com>
 Daisuke Fujita <dtanshi45@gmail.com>
 Daisuke Suzuki <daisuzu@gmail.com>
 Daker Fernandes Pinheiro <daker.fernandes.pinheiro@intel.com>
@@ -603,6 +621,7 @@ Daniel Langner <s8572327@gmail.com>
 Daniel Lidén <daniel.liden.87@gmail.com>
 Daniel Lublin <daniel@lublin.se>
 Daniel Mangum <georgedanielmangum@gmail.com>
+Daniel Marshall <daniel.marshall2@ibm.com>
 Daniel Martí <mvdan@mvdan.cc>
 Daniel McCarney <cpu@letsencrypt.org>
 Daniel Morsing <daniel.morsing@gmail.com>
@@ -727,6 +746,7 @@ Dmitry Mottl <dmitry.mottl@gmail.com>
 Dmitry Neverov <dmitry.neverov@gmail.com>
 Dmitry Savintsev <dsavints@gmail.com>
 Dmitry Yakunin <nonamezeil@gmail.com>
+Dmytro Shynkevych <dm.shynk@gmail.com>
 Doga Fincan <doga@icloud.com>
 Domas Tamašauskas <puerdomus@gmail.com>
 Domen Ipavec <domen@ipavec.net>
@@ -751,6 +771,7 @@ Dustin Herbison <djherbis@gmail.com>
 Dustin Long <dustmop@gmail.com>
 Dustin Sallings <dsallings@gmail.com>
 Dustin Shields-Cloues <dcloues@gmail.com>
+Dustin Spicuzza <dustin.spicuzza@gmail.com>
 Dvir Volk <dvir@everything.me> <dvirsky@gmail.com>
 Dylan Waits <dylan@waits.io>
 Ed Schouten <ed@nuxi.nl>
@@ -810,9 +831,11 @@ Erin Masatsugu <erin.masatsugu@gmail.com>
 Ernest Chiang <ernest_chiang@htc.com>
 Erwin Oegema <blablaechthema@hotmail.com>
 Esko Luontola <esko.luontola@gmail.com>
+Ethan Anderson <eanderson@atlassian.com>
 Ethan Burns <eaburns@google.com>
 Ethan Hur <ethan0311@gmail.com>
 Ethan Miller <eamiller@us.ibm.com>
+Ethan Reesor <ethan.reesor@gmail.com>
 Euan Kemp <euank@euank.com>
 Eugene Formanenko <mo4islona@gmail.com>
 Eugene Kalinin <e.v.kalinin@gmail.com>
@@ -831,8 +854,10 @@ Evgeniy Polyakov <zbr@ioremap.net>
 Ewan Chou <coocood@gmail.com>
 Ewan Valentine <ewan.valentine89@gmail.com>
 Eyal Posener <posener@gmail.com>
+F. Talha Altınel <talhaaltinel@hotmail.com>
 Fabian Wickborn <fabian@wickborn.net>
 Fabian Zaremba <fabian@youremail.eu>
+Fabio Falzoi <fabio.falzoi84@gmail.com>
 Fabrizio Milo <mistobaan@gmail.com>
 Faiyaz Ahmed <ahmedf@vmware.com>
 Fan Hongjian <fan.howard@gmail.com>
@@ -861,21 +886,25 @@ Firmansyah Adiputra <frm.adiputra@gmail.com>
 Florian Forster <octo@google.com>
 Florian Uekermann <florian@uekermann-online.de> <f1@uekermann-online.de>
 Florian Weimer <fw@deneb.enyo.de>
+Florin Papa <fpapa@google.com>
 Florin Patan <florinpatan@gmail.com>
 Folke Behrens <folke@google.com>
 Ford Hurley <ford.hurley@gmail.com>
+Forest Johnson <forest.n.johnson@gmail.com>
 Francesc Campoy <campoy@golang.org>
 Francesco Guardiani <francescoguard@gmail.com>
 Francesco Renzi <rentziass@gmail.com>
 Francisco Claude <fclaude@recoded.cl>
 Francisco Rojas <francisco.rojas.gallegos@gmail.com>
 Francisco Souza <franciscossouza@gmail.com>
+Frank Chiarulli Jr <frank@frankchiarulli.com>
 Frank Schroeder <frank.schroeder@gmail.com>
 Frank Somers <fsomers@arista.com>
 Frederic Guillot <frederic.guillot@gmail.com>
 Frederick Kelly Mayle III <frederickmayle@gmail.com>
 Frederik Ring <frederik.ring@gmail.com>
 Frederik Zipp <fzipp@gmx.de>
+Frediano Ziglio <freddy77@gmail.com>
 Fredrik Enestad <fredrik.enestad@soundtrackyourbrand.com>
 Fredrik Forsmo <fredrik.forsmo@gmail.com>
 Fredrik Wallgren <fredrik.wallgren@gmail.com>
@@ -914,6 +943,7 @@ Geon Kim <geon0250@gmail.com>
 Georg Reinke <guelfey@gmail.com>
 George Gkirtsou <ggirtsou@gmail.com>
 George Hartzell <hartzell@alerce.com>
+George Looshch <looshch@loosh.ch>
 George Shammas <george@shamm.as> <georgyo@gmail.com>
 George Tsilias <tsiliasg@gmail.com>
 Gerasimos (Makis) Maropoulos <kataras2006@hotmail.com>
@@ -954,19 +984,27 @@ GitHub User @fatedier (7346661) <fatedier@gmail.com>
 GitHub User @frennkie (6499251) <mail@rhab.de>
 GitHub User @geedchin (11672310) <geedchin@gmail.com>
 GitHub User @GrigoriyMikhalkin (3637857) <grigoriymikhalkin@gmail.com>
+GitHub User @Gusted (25481501) <williamzijl7@hotmail.com>
 GitHub User @hengwu0 (41297446) <41297446+hengwu0@users.noreply.github.com>
 GitHub User @hitzhangjie (3725760) <hit.zhangjie@gmail.com>
+GitHub User @hkhere (33268704) <33268704+hkhere@users.noreply.github.com>
+GitHub User @hopehook (7326168) <hopehook.com@gmail.com>
 GitHub User @hqpko (13887251) <whaibin01@hotmail.com>
+GitHub User @Illirgway (5428603) <illirgway@gmail.com>
 GitHub User @itchyny (375258) <itchyny@hatena.ne.jp>
 GitHub User @jinmiaoluo (39730824) <jinmiaoluo@icloud.com>
 GitHub User @jopbrown (6345470) <msshane2008@gmail.com>
 GitHub User @kazyshr (30496953) <kazyshr0301@gmail.com>
 GitHub User @kc1212 (1093806) <kc1212@users.noreply.github.com>
 GitHub User @komisan19 (18901496) <komiyama6219@gmail.com>
+GitHub User @korzhao (64203902) <korzhao95@gmail.com>
 GitHub User @Kropekk (13366453) <kamilkropiewnicki@gmail.com>
+GitHub User @lgbgbl (65756378) <lgbgbl@qq.com>
 GitHub User @lhl2617 (33488131) <l.h.lee2617@gmail.com>
 GitHub User @linguohua (3434367) <lghchinaidea@gmail.com>
+GitHub User @lloydchang (1329685) <lloydchang@gmail.com>
 GitHub User @LotusFenn (13775899) <fenn.lotus@gmail.com>
+GitHub User @luochuanhang (96416201) <chuanhangluo@gmail.com>
 GitHub User @ly303550688 (11519839) <yang.liu636@gmail.com>
 GitHub User @madiganz (18340029) <zacharywmadigan@gmail.com>
 GitHub User @maltalex (10195391) <code@bit48.net>
@@ -976,6 +1014,7 @@ GitHub User @micnncim (21333876) <micnncim@gmail.com>
 GitHub User @mkishere (224617) <224617+mkishere@users.noreply.github.com>
 GitHub User @nu50218 (40682920) <nu_ll@icloud.com>
 GitHub User @OlgaVlPetrova (44112727) <OVPpetrova@gmail.com>
+GitHub User @pierwill (19642016) <pierwill@users.noreply.github.com>
 GitHub User @pityonline (438222) <pityonline@gmail.com>
 GitHub User @po3rin (29445112) <abctail30@gmail.com>
 GitHub User @pokutuna (57545) <popopopopokutuna@gmail.com>
@@ -983,13 +1022,18 @@ GitHub User @povsister (11040951) <pov@mahou-shoujo.moe>
 GitHub User @pytimer (17105586) <lixin20101023@gmail.com>
 GitHub User @qcrao (7698088) <qcrao91@gmail.com>
 GitHub User @ramenjuniti (32011829) <ramenjuniti@gmail.com>
+GitHub User @renthraysk (30576707) <renthraysk@gmail.com>
+GitHub User @roudkerk (52280478) <roudkerk@google.com>
 GitHub User @saitarunreddy (21041941) <saitarunreddypalla@gmail.com>
 GitHub User @SataQiu (9354727) <shidaqiu2018@gmail.com>
+GitHub User @seifchen (23326132) <chenxuefeng1207@gmail.com>
 GitHub User @shogo-ma (9860598) <Choroma194@gmail.com>
 GitHub User @sivchari (55221074) <shibuuuu5@gmail.com>
 GitHub User @skanehira (7888591) <sho19921005@gmail.com>
 GitHub User @soolaugust (10558124) <soolaugust@gmail.com>
 GitHub User @surechen (7249331) <surechen17@gmail.com>
+GitHub User @syumai (6882878) <syumai@gmail.com>
+GitHub User @tangxi666 (48145175) <tx1275044634@gmail.com>
 GitHub User @tatsumack (4510569) <tatsu.mack@gmail.com>
 GitHub User @tell-k (26263) <ffk2005@gmail.com>
 GitHub User @tennashi (10219626) <tennashio@gmail.com>
@@ -999,6 +1043,7 @@ GitHub User @unbyte (5772358) <i@shangyes.net>
 GitHub User @uropek (39370426) <uropek@gmail.com>
 GitHub User @utkarsh-extc (53217283) <utkarsh.extc@gmail.com>
 GitHub User @witchard (4994659) <witchard@hotmail.co.uk>
+GitHub User @wmdngngng (22067700) <wangmingdong@gmail.com>
 GitHub User @wolf1996 (5901874) <ksgiv37@gmail.com>
 GitHub User @yah01 (12216890) <kagaminehuan@gmail.com>
 GitHub User @yuanhh (1298735) <yuan415030@gmail.com>
@@ -1029,12 +1074,14 @@ Guilherme Garnier <guilherme.garnier@gmail.com>
 Guilherme Goncalves <guilhermeaugustosg@gmail.com>
 Guilherme Rezende <guilhermebr@gmail.com>
 Guilherme Souza <32180229+gqgs@users.noreply.github.com>
+Guillaume Blaquiere <guillaume.blaquiere@gmail.com>
 Guillaume J. Charmes <guillaume@charmes.net>
 Guillaume Sottas <guillaumesottas@gmail.com>
 Günther Noack <gnoack@google.com>
 Guobiao Mei <meiguobiao@gmail.com>
 Guodong Li <guodongli@google.com>
 Guoliang Wang <iamwgliang@gmail.com>
+Guoqi Chen <chenguoqi@loongson.cn>
 Gustav Paul <gustav.paul@gmail.com>
 Gustav Westling <gustav@westling.xyz>
 Gustavo Franco <gustavorfranco@gmail.com>
@@ -1050,6 +1097,8 @@ Hang Qian <hangqian90@gmail.com>
 Hanjun Kim <hallazzang@gmail.com>
 Hanlin He <hanling.he@gmail.com>
 Hanlin Shi <shihanlin9@gmail.com>
+Hans Nielsen <hans@stackallocated.com>
+Hao Mou <mouhao.mu@gmail.com>
 Haoran Luo <haoran.luo@chaitin.com>
 Haosdent Huang <haosdent@gmail.com>
 Harald Nordgren <haraldnordgren@gmail.com>
@@ -1126,6 +1175,7 @@ Igor Zhilianin <igor.zhilianin@gmail.com>
 Ikko Ashimine <eltociear@gmail.com>
 Illya Yalovyy <yalovoy@gmail.com>
 Ilya Chukov <56119080+Elias506@users.noreply.github.com>
+Ilya Mateyko <me@astrophena.name>
 Ilya Sinelnikov <sidhmangh@gmail.com>
 Ilya Tocar <ilya.tocar@intel.com>
 INADA Naoki <songofacandy@gmail.com>
@@ -1157,6 +1207,7 @@ Jaana Burcu Dogan <jbd@google.com> <jbd@golang.org> <burcujdogan@gmail.com>
 Jaap Aarts <jaap.aarts1@gmail.com>
 Jack Britton <jackxbritton@gmail.com>
 Jack Lindamood <jlindamo@justin.tv>
+Jack You <jamesyou@google.com>
 Jacob Baskin <jbaskin@google.com>
 Jacob Blain Christen <dweomer5@gmail.com>
 Jacob H. Haven <jacob@cloudflare.com>
@@ -1165,6 +1216,7 @@ Jacob Walker <jacobwalker0814@gmail.com>
 Jaden Teng <long.asyn@gmail.com>
 Jae Kwon <jae@tendermint.com>
 Jake B <doogie1012@gmail.com>
+Jake Ciolek <jakub@ciolek.dev>
 Jakob Borg <jakob@nym.se>
 Jakob Weisblat <jakobw@mit.edu>
 Jakub Čajka <jcajka@redhat.com>
@@ -1183,6 +1235,7 @@ James Eady <jmeady@google.com>
 James Fennell <jpfennell@google.com>
 James Fysh <james.fysh@gmail.com>
 James Gray <james@james4k.com>
+James Harris <mailjamesharris@gmail.com>
 James Hartig <fastest963@gmail.com>
 James Kasten <jdkasten@google.com>
 James Lawrence <jljatone@gmail.com>
@@ -1246,6 +1299,7 @@ Jean de Klerk <deklerk@google.com>
 Jean-André Santoni <jean.andre.santoni@gmail.com>
 Jean-François Bustarret <jf@bustarret.com>
 Jean-Francois Cantin <jfcantin@gmail.com>
+Jean-Hadrien Chabran <jh@chabran.fr>
 Jean-Marc Eurin <jmeurin@google.com>
 Jean-Nicolas Moal <jn.moal@gmail.com>
 Jed Denlea <jed@fastly.com>
@@ -1260,6 +1314,7 @@ Jeff Johnson <jrjohnson@google.com>
 Jeff R. Allen <jra@nella.org> <jeff.allen@gmail.com>
 Jeff Sickel <jas@corpus-callosum.com>
 Jeff Wendling <jeff@spacemonkey.com>
+Jeff Wentworth <j.wentworth@gmail.com>
 Jeff Widman <jeff@jeffwidman.com>
 Jeffrey H <jeffreyh192@gmail.com>
 Jelte Fennema <github-tech@jeltef.nl>
@@ -1282,6 +1337,7 @@ Jesús Espino <jespinog@gmail.com>
 Jia Zhan <jzhan@uber.com>
 Jiacai Liu <jiacai2050@gmail.com>
 Jiahao Lu <lujjjh@gmail.com>
+Jiahua Wang <wjh180909@gmail.com>
 Jianing Yu <jnyu@google.com>
 Jianqiao Li <jianqiaoli@google.com>
 Jiayu Yi <yijiayu@gmail.com>
@@ -1298,10 +1354,12 @@ Jingcheng Zhang <diogin@gmail.com>
 Jingguo Yao <yaojingguo@gmail.com>
 Jingnan Si <jingnan.si@gmail.com>
 Jinkun Zhang <franksnolf@gmail.com>
+Jinwen Wo <wojinwen@huawei.com>
 Jiong Du <londevil@gmail.com>
 Jirka Daněk <dnk@mail.muni.cz>
 Jiulong Wang <jiulongw@gmail.com>
 Joakim Sernbrant <serbaut@gmail.com>
+Jochen Weber <jochen.weber80@gmail.com>
 Joe Bowbeer <joe.bowbeer@gmail.com>
 Joe Cortopassi <joe@joecortopassi.com>
 Joe Farrell <joe2farrell@gmail.com>
@@ -1324,6 +1382,7 @@ Johan Euphrosine <proppy@google.com>
 Johan Jansson <johan.jansson@iki.fi>
 Johan Knutzen <johan@senri.se>
 Johan Sageryd <j@1616.se>
+Johannes Altmanninger <aclopte@gmail.com>
 Johannes Huning <johannes.huning@gmail.com>
 John Asmuth <jasmuth@gmail.com>
 John Bampton <jbampton@gmail.com>
@@ -1338,10 +1397,12 @@ John Howard Palevich <jack.palevich@gmail.com>
 John Jago <johnjago@protonmail.com>
 John Jeffery <jjeffery@sp.com.au>
 John Jenkins <twodopeshaggy@gmail.com>
+John Kelly <jkelly@squarespace.com>
 John Leidegren <john.leidegren@gmail.com>
 John McCabe <john@johnmccabe.net>
 John Moore <johnkenneth.moore@gmail.com>
 John Newlin <jnewlin@google.com>
+John Olheiser <john.olheiser@gmail.com>
 John Papandriopoulos <jpap.code@gmail.com>
 John Potocny <johnp@vividcortex.com>
 John R. Lenton <jlenton@gmail.com>
@@ -1382,6 +1443,7 @@ Jordan Rupprecht <rupprecht@google.com>
 Jordi Martin <jordimartin@gmail.com>
 Jorge Araya <jorgejavieran@yahoo.com.mx>
 Jorge L. Fatta <jorge.fatta@auth0.com>
+Jorge Troncoso <jatron@google.com>
 Jos Visser <josv@google.com>
 Josa Gesell <josa@gesell.me>
 Jose Luis Vázquez González <josvazg@gmail.com>
@@ -1508,6 +1570,7 @@ Keyuan Li <keyuanli123@gmail.com>
 Kezhu Wang <kezhuw@gmail.com>
 Khosrow Moossavi <khos2ow@gmail.com>
 Kieran Colford <kieran@kcolford.com>
+Kieran Gorman <kieran.j.gorman@gmail.com>
 Kim Shrier <kshrier@racktopsystems.com>
 Kim Yongbin <kybinz@gmail.com>
 Kir Kolyshkin <kolyshkin@gmail.com>
@@ -1577,6 +1640,7 @@ Leonel Quinteros <leonel.quinteros@gmail.com>
 Lev Shamardin <shamardin@gmail.com>
 Lewin Bormann <lewin.bormann@gmail.com>
 Lewis Waddicor <nemesismk2@gmail.com>
+Li-Yu Yu <aaronyu@google.com>
 Liam Haworth <liam@haworth.id.au>
 Lily Chung <lilithkchung@gmail.com>
 Lingchao Xin <douglarek@gmail.com>
@@ -1657,7 +1721,9 @@ Mark Adams <mark@markadams.me>
 Mark Bucciarelli <mkbucc@gmail.com>
 Mark Dain <mark@markdain.net>
 Mark Glines <mark@glines.org>
+Mark Hansen <markhansen@google.com>
 Mark Harrison <marhar@google.com>
+Mark Jeffery <dandare100@gmail.com>
 Mark Percival <m@mdp.im>
 Mark Pulford <mark@kyne.com.au>
 Mark Rushakoff <mark.rushakoff@gmail.com>
@@ -1686,7 +1752,7 @@ Martin Hoefling <martin.hoefling@gmx.de>
 Martin Kreichgauer <martinkr@google.com>
 Martin Kunc <martinkunc@users.noreply.github.com>
 Martin Lindhe <martin.j.lindhe@gmail.com>
-Martin Möhrmann <moehrmann@google.com> <martisch@uos.de>
+Martin Möhrmann <martin@golang.org> <moehrmann@google.com> <martisch@uos.de>
 Martin Neubauer <m.ne@gmx.net>
 Martin Olsen <github.com@martinolsen.net>
 Martin Olsson <martin@minimum.se>
@ -1741,6 +1807,7 @@ Matthew Denton <mdenton@skyportsystems.com>
Matthew Holt <Matthew.Holt+git@gmail.com> Matthew Holt <Matthew.Holt+git@gmail.com>
Matthew Horsnell <matthew.horsnell@gmail.com> Matthew Horsnell <matthew.horsnell@gmail.com>
Matthew Waters <mwwaters@gmail.com> Matthew Waters <mwwaters@gmail.com>
Matthias Dötsch <matze@mdoetsch.de>
Matthias Frei <matthias.frei@inf.ethz.ch> Matthias Frei <matthias.frei@inf.ethz.ch>
Matthieu Hauglustaine <matt.hauglustaine@gmail.com> Matthieu Hauglustaine <matt.hauglustaine@gmail.com>
Matthieu Olivier <olivier.matthieu@gmail.com> Matthieu Olivier <olivier.matthieu@gmail.com>
@ -1814,6 +1881,7 @@ Michal Bohuslávek <mbohuslavek@gmail.com>
Michal Cierniak <cierniak@google.com> Michal Cierniak <cierniak@google.com>
Michał Derkacz <ziutek@lnet.pl> Michał Derkacz <ziutek@lnet.pl>
Michal Franc <lam.michal.franc@gmail.com> Michal Franc <lam.michal.franc@gmail.com>
Michal Hruby <michal@axiom.co>
Michał Łowicki <mlowicki@gmail.com> Michał Łowicki <mlowicki@gmail.com>
Michal Pristas <michal.pristas@gmail.com> Michal Pristas <michal.pristas@gmail.com>
Michal Rostecki <mrostecki@suse.de> Michal Rostecki <mrostecki@suse.de>
@ -1844,6 +1912,7 @@ Mike Solomon <msolo@gmail.com>
Mike Strosaker <strosake@us.ibm.com> Mike Strosaker <strosake@us.ibm.com>
Mike Tsao <mike@sowbug.com> Mike Tsao <mike@sowbug.com>
Mike Wiacek <mjwiacek@google.com> Mike Wiacek <mjwiacek@google.com>
Mikhail Faraponov <11322032+moredure@users.noreply.github.com>
Mikhail Fesenko <proggga@gmail.com> Mikhail Fesenko <proggga@gmail.com>
Mikhail Gusarov <dottedmag@dottedmag.net> Mikhail Gusarov <dottedmag@dottedmag.net>
Mikhail Panchenko <m@mihasya.com> Mikhail Panchenko <m@mihasya.com>
@ -1870,6 +1939,7 @@ Moritz Fain <moritz@fain.io>
Moriyoshi Koizumi <mozo@mozo.jp> Moriyoshi Koizumi <mozo@mozo.jp>
Morten Siebuhr <sbhr@sbhr.dk> Morten Siebuhr <sbhr@sbhr.dk>
Môshe van der Sterre <moshevds@gmail.com> Môshe van der Sterre <moshevds@gmail.com>
Mostafa Solati <mostafa.solati@gmail.com>
Mostyn Bramley-Moore <mostyn@antipode.se> Mostyn Bramley-Moore <mostyn@antipode.se>
Mrunal Patel <mrunalp@gmail.com> Mrunal Patel <mrunalp@gmail.com>
Muhammad Falak R Wani <falakreyaz@gmail.com> Muhammad Falak R Wani <falakreyaz@gmail.com>
@ -1927,6 +1997,7 @@ Nick Miyake <nmiyake@users.noreply.github.com>
Nick Patavalis <nick.patavalis@gmail.com> Nick Patavalis <nick.patavalis@gmail.com>
Nick Petroni <npetroni@cs.umd.edu> Nick Petroni <npetroni@cs.umd.edu>
Nick Robinson <nrobinson13@gmail.com> Nick Robinson <nrobinson13@gmail.com>
Nick Sherron <nsherron90@gmail.com>
Nick Smolin <nick27surgut@gmail.com> Nick Smolin <nick27surgut@gmail.com>
Nicolas BRULEZ <n.brulez@gmail.com> Nicolas BRULEZ <n.brulez@gmail.com>
Nicolas Kaiser <nikai@nikai.net> Nicolas Kaiser <nikai@nikai.net>
@ -1956,6 +2027,7 @@ Noah Santschi-Cooney <noah@santschi-cooney.ch>
Noble Johnson <noblepoly@gmail.com> Noble Johnson <noblepoly@gmail.com>
Nodir Turakulov <nodir@google.com> Nodir Turakulov <nodir@google.com>
Noel Georgi <git@frezbo.com> Noel Georgi <git@frezbo.com>
Nooras Saba <saba@golang.org>
Norberto Lopes <nlopes.ml@gmail.com> Norberto Lopes <nlopes.ml@gmail.com>
Norman B. Lancaster <qbradq@gmail.com> Norman B. Lancaster <qbradq@gmail.com>
Nuno Cruces <ncruces@users.noreply.github.com> Nuno Cruces <ncruces@users.noreply.github.com>
@ -1973,6 +2045,7 @@ Oliver Tan <otan@cockroachlabs.com>
Oliver Tonnhofer <olt@bogosoft.com> Oliver Tonnhofer <olt@bogosoft.com>
Olivier Antoine <olivier.antoine@gmail.com> Olivier Antoine <olivier.antoine@gmail.com>
Olivier Duperray <duperray.olivier@gmail.com> Olivier Duperray <duperray.olivier@gmail.com>
Olivier Mengué <olivier.mengue@gmail.com>
Olivier Poitrey <rs@dailymotion.com> Olivier Poitrey <rs@dailymotion.com>
Olivier Saingre <osaingre@gmail.com> Olivier Saingre <osaingre@gmail.com>
Olivier Wulveryck <olivier.wulveryck@gmail.com> Olivier Wulveryck <olivier.wulveryck@gmail.com>
@ -1982,6 +2055,7 @@ Ori Bernstein <ori@eigenstate.org>
Ori Rawlings <orirawlings@gmail.com> Ori Rawlings <orirawlings@gmail.com>
Oryan Moshe <iamoryanmoshe@gmail.com> Oryan Moshe <iamoryanmoshe@gmail.com>
Osamu TONOMORI <osamingo@gmail.com> Osamu TONOMORI <osamingo@gmail.com>
Oscar Söderlund <oscar.soderlund@einride.tech>
Özgür Kesim <oec-go@kesim.org> Özgür Kesim <oec-go@kesim.org>
Pablo Caderno <kaderno@gmail.com> Pablo Caderno <kaderno@gmail.com>
Pablo Lalloni <plalloni@gmail.com> Pablo Lalloni <plalloni@gmail.com>
@ -2014,6 +2088,7 @@ Patrick Pelletier <pp.pelletier@gmail.com>
Patrick Riley <pfr@google.com> Patrick Riley <pfr@google.com>
Patrick Smith <pat42smith@gmail.com> Patrick Smith <pat42smith@gmail.com>
Patrik Lundin <patrik@sigterm.se> Patrik Lundin <patrik@sigterm.se>
Patrik Nyblom <pnyb@google.com>
Paul A Querna <paul.querna@gmail.com> Paul A Querna <paul.querna@gmail.com>
Paul Borman <borman@google.com> Paul Borman <borman@google.com>
Paul Boyd <boyd.paul2@gmail.com> Paul Boyd <boyd.paul2@gmail.com>
@ -2042,6 +2117,7 @@ Paul Wankadia <junyer@google.com>
Paulo Casaretto <pcasaretto@gmail.com> Paulo Casaretto <pcasaretto@gmail.com>
Paulo Flabiano Smorigo <pfsmorigo@linux.vnet.ibm.com> Paulo Flabiano Smorigo <pfsmorigo@linux.vnet.ibm.com>
Paulo Gomes <paulo.gomes.uk@gmail.com> Paulo Gomes <paulo.gomes.uk@gmail.com>
Pavel Kositsyn <kositsyn.pa@phystech.edu>
Pavel Paulau <pavel.paulau@gmail.com> Pavel Paulau <pavel.paulau@gmail.com>
Pavel Watson <watsonpavel@gmail.com> Pavel Watson <watsonpavel@gmail.com>
Pavel Zinovkin <pavel.zinovkin@gmail.com> Pavel Zinovkin <pavel.zinovkin@gmail.com>
@ -2049,6 +2125,7 @@ Pavlo Sumkin <ymkins@gmail.com>
Pawel Knap <pawelknap88@gmail.com> Pawel Knap <pawelknap88@gmail.com>
Pawel Szczur <filemon@google.com> Pawel Szczur <filemon@google.com>
Paweł Szulik <pawel.szulik@intel.com> Paweł Szulik <pawel.szulik@intel.com>
Pedro Lopez Mareque <pedro.lopez.mareque@gmail.com>
Pei Xian Chee <luciolas1991@gmail.com> Pei Xian Chee <luciolas1991@gmail.com>
Pei-Ming Wu <p408865@gmail.com> Pei-Ming Wu <p408865@gmail.com>
Pen Tree <appletree2479@outlook.com> Pen Tree <appletree2479@outlook.com>
@ -2164,6 +2241,7 @@ Rhys Hiltner <rhys@justin.tv>
Ricardo Padilha <ricardospadilha@gmail.com> Ricardo Padilha <ricardospadilha@gmail.com>
Ricardo Pchevuzinske Katz <ricardo.katz@serpro.gov.br> Ricardo Pchevuzinske Katz <ricardo.katz@serpro.gov.br>
Ricardo Seriani <ricardo.seriani@gmail.com> Ricardo Seriani <ricardo.seriani@gmail.com>
Rich Hong <hong.rich@gmail.com>
Richard Barnes <rlb@ipv.sx> Richard Barnes <rlb@ipv.sx>
Richard Crowley <r@rcrowley.org> Richard Crowley <r@rcrowley.org>
Richard Dingwall <rdingwall@gmail.com> Richard Dingwall <rdingwall@gmail.com>
@ -2179,6 +2257,7 @@ Rick Hudson <rlh@golang.org>
Rick Sayre <whorfin@gmail.com> Rick Sayre <whorfin@gmail.com>
Rijnard van Tonder <rvantonder@gmail.com> Rijnard van Tonder <rvantonder@gmail.com>
Riku Voipio <riku.voipio@linaro.org> Riku Voipio <riku.voipio@linaro.org>
Riley Avron <ra.git@posteo.net>
Risto Jaakko Saarelma <rsaarelm@gmail.com> Risto Jaakko Saarelma <rsaarelm@gmail.com>
Rob Earhart <earhart@google.com> Rob Earhart <earhart@google.com>
Rob Findley <rfindley@google.com> Rob Findley <rfindley@google.com>
@ -2186,8 +2265,10 @@ Rob Norman <rob.norman@infinitycloud.com>
Rob Phoenix <rob@robphoenix.com> Rob Phoenix <rob@robphoenix.com>
Rob Pike <r@golang.org> Rob Pike <r@golang.org>
Robert Ayrapetyan <robert.ayrapetyan@gmail.com> Robert Ayrapetyan <robert.ayrapetyan@gmail.com>
Robert Burke <rebo@google.com>
Robert Daniel Kortschak <dan.kortschak@adelaide.edu.au> <dan@kortschak.io> Robert Daniel Kortschak <dan.kortschak@adelaide.edu.au> <dan@kortschak.io>
Robert Dinu <r@varp.se> Robert Dinu <r@varp.se>
Robert Engels <rengels@ix.netcom.com>
Robert Figueiredo <robfig@gmail.com> Robert Figueiredo <robfig@gmail.com>
Robert Griesemer <gri@golang.org> Robert Griesemer <gri@golang.org>
Robert Hencke <robert.hencke@gmail.com> Robert Hencke <robert.hencke@gmail.com>
@ -2212,6 +2293,7 @@ Roger Peppe <rogpeppe@gmail.com>
Rohan Challa <rohan@golang.org> Rohan Challa <rohan@golang.org>
Rohan Verma <rohanverma2004@gmail.com> Rohan Verma <rohanverma2004@gmail.com>
Rohith Ravi <entombedvirus@gmail.com> Rohith Ravi <entombedvirus@gmail.com>
Roi Martin <jroi.martin@gmail.com>
Roland Illig <roland.illig@gmx.de> Roland Illig <roland.illig@gmx.de>
Roland Shoemaker <rolandshoemaker@gmail.com> Roland Shoemaker <rolandshoemaker@gmail.com>
Romain Baugue <romain.baugue@elwinar.com> Romain Baugue <romain.baugue@elwinar.com>
@ -2242,6 +2324,7 @@ Ryan Canty <jrcanty@gmail.com>
Ryan Dahl <ry@tinyclouds.org> Ryan Dahl <ry@tinyclouds.org>
Ryan Hitchman <hitchmanr@gmail.com> Ryan Hitchman <hitchmanr@gmail.com>
Ryan Kohler <ryankohler@google.com> Ryan Kohler <ryankohler@google.com>
Ryan Leung <rleungx@gmail.com>
Ryan Lower <rpjlower@gmail.com> Ryan Lower <rpjlower@gmail.com>
Ryan Roden-Corrent <ryan@rcorre.net> Ryan Roden-Corrent <ryan@rcorre.net>
Ryan Seys <ryan@ryanseys.com> Ryan Seys <ryan@ryanseys.com>
@ -2275,6 +2358,7 @@ Sami Pönkänen <sami.ponkanen@gmail.com>
Samuel Kelemen <SCKelemen@users.noreply.github.com> Samuel Kelemen <SCKelemen@users.noreply.github.com>
Samuel Tan <samueltan@google.com> Samuel Tan <samueltan@google.com>
Samuele Pedroni <pedronis@lucediurna.net> Samuele Pedroni <pedronis@lucediurna.net>
San Ye <xyesan@gmail.com>
Sander van Harmelen <sander@vanharmelen.nl> Sander van Harmelen <sander@vanharmelen.nl>
Sanjay Menakuru <balasanjay@gmail.com> Sanjay Menakuru <balasanjay@gmail.com>
Santhosh Kumar Tekuri <santhosh.tekuri@gmail.com> Santhosh Kumar Tekuri <santhosh.tekuri@gmail.com>
@ -2339,6 +2423,7 @@ Shaba Abhiram <shabarivas.abhiram@gmail.com>
Shahar Kohanim <skohanim@gmail.com> Shahar Kohanim <skohanim@gmail.com>
Shailesh Suryawanshi <ss.shailesh28@gmail.com> Shailesh Suryawanshi <ss.shailesh28@gmail.com>
Shamil Garatuev <garatuev@gmail.com> Shamil Garatuev <garatuev@gmail.com>
Shamim Akhtar <shamim.rhce@gmail.com>
Shane Hansen <shanemhansen@gmail.com> Shane Hansen <shanemhansen@gmail.com>
Shang Jian Ding <sding3@ncsu.edu> Shang Jian Ding <sding3@ncsu.edu>
Shaozhen Ding <dsz0111@gmail.com> Shaozhen Ding <dsz0111@gmail.com>
@ -2375,6 +2460,7 @@ Simon Drake <simondrake1990@gmail.com>
Simon Ferquel <simon.ferquel@docker.com> Simon Ferquel <simon.ferquel@docker.com>
Simon Frei <freisim93@gmail.com> Simon Frei <freisim93@gmail.com>
Simon Jefford <simon.jefford@gmail.com> Simon Jefford <simon.jefford@gmail.com>
Simon Law <sfllaw@sfllaw.ca>
Simon Rawet <simon@rawet.se> Simon Rawet <simon@rawet.se>
Simon Rozman <simon@rozman.si> Simon Rozman <simon@rozman.si>
Simon Ser <contact@emersion.fr> Simon Ser <contact@emersion.fr>
@ -2440,6 +2526,7 @@ Suharsh Sivakumar <suharshs@google.com>
Sukrit Handa <sukrit.handa@utoronto.ca> Sukrit Handa <sukrit.handa@utoronto.ca>
Sunny <me@darkowlzz.space> Sunny <me@darkowlzz.space>
Suriyaa Sundararuban <suriyaasundararuban@gmail.com> Suriyaa Sundararuban <suriyaasundararuban@gmail.com>
Suvaditya Sur <suvaditya.sur@gmail.com>
Suyash <dextrous93@gmail.com> Suyash <dextrous93@gmail.com>
Suzy Mueller <suzmue@golang.org> Suzy Mueller <suzmue@golang.org>
Sven Almgren <sven@tras.se> Sven Almgren <sven@tras.se>
@ -2502,6 +2589,7 @@ Thomas Symborski <thomas.symborski@gmail.com>
Thomas Wanielista <tomwans@gmail.com> Thomas Wanielista <tomwans@gmail.com>
Thorben Krueger <thorben.krueger@gmail.com> Thorben Krueger <thorben.krueger@gmail.com>
Thordur Bjornsson <thorduri@secnorth.net> Thordur Bjornsson <thorduri@secnorth.net>
Tiago Peczenyj <tpeczenyj@weborama.com>
Tiago Queiroz <contato@tiago.eti.br> Tiago Queiroz <contato@tiago.eti.br>
Tianji Wu <the729@gmail.com> Tianji Wu <the729@gmail.com>
Tianon Gravi <admwiggin@gmail.com> Tianon Gravi <admwiggin@gmail.com>
@ -2636,6 +2724,7 @@ Vladimir Varankin <nek.narqo@gmail.com>
Vojtech Bocek <vbocek@gmail.com> Vojtech Bocek <vbocek@gmail.com>
Volker Dobler <dr.volker.dobler@gmail.com> Volker Dobler <dr.volker.dobler@gmail.com>
Volodymyr Paprotski <vpaprots@ca.ibm.com> Volodymyr Paprotski <vpaprots@ca.ibm.com>
Vyacheslav Pachkov <slava.pach@gmail.com>
W. Trevor King <wking@tremily.us> W. Trevor King <wking@tremily.us>
Wade Simmons <wade@wades.im> Wade Simmons <wade@wades.im>
Wagner Riffel <wgrriffel@gmail.com> Wagner Riffel <wgrriffel@gmail.com>
@ -2653,6 +2742,7 @@ Wei Guangjing <vcc.163@gmail.com>
Wei Xiao <wei.xiao@arm.com> Wei Xiao <wei.xiao@arm.com>
Wei Xikai <xykwei@gmail.com> Wei Xikai <xykwei@gmail.com>
Weichao Tang <tevic.tt@gmail.com> Weichao Tang <tevic.tt@gmail.com>
Weilu Jia <optix2000@gmail.com>
Weixie Cui <cuiweixie@gmail.com> <523516579@qq.com> Weixie Cui <cuiweixie@gmail.com> <523516579@qq.com>
Wembley G. Leach, Jr <wembley.gl@gmail.com> Wembley G. Leach, Jr <wembley.gl@gmail.com>
Wenlei (Frank) He <wlhe@google.com> Wenlei (Frank) He <wlhe@google.com>
@ -2722,9 +2812,11 @@ Yuichi Nishiwaki <yuichi.nishiwaki@gmail.com>
Yuji Yaginuma <yuuji.yaginuma@gmail.com> Yuji Yaginuma <yuuji.yaginuma@gmail.com>
Yuki Ito <mrno110y@gmail.com> Yuki Ito <mrno110y@gmail.com>
Yuki OKUSHI <huyuumi.dev@gmail.com> Yuki OKUSHI <huyuumi.dev@gmail.com>
Yuki Osaki <yuki.osaki7@gmail.com>
Yuki Yugui Sonoda <yugui@google.com> Yuki Yugui Sonoda <yugui@google.com>
Yukihiro Nishinaka <6elpinal@gmail.com> Yukihiro Nishinaka <6elpinal@gmail.com>
YunQiang Su <syq@debian.org> YunQiang Su <syq@debian.org>
Yuntao Wang <ytcoode@gmail.com>
Yury Smolsky <yury@smolsky.by> Yury Smolsky <yury@smolsky.by>
Yusuke Kagiwada <block.rxckin.beats@gmail.com> Yusuke Kagiwada <block.rxckin.beats@gmail.com>
Yuusei Kuwana <kuwana@kumama.org> Yuusei Kuwana <kuwana@kumama.org>
@ -2736,7 +2828,9 @@ Zach Gershman <zachgersh@gmail.com>
Zach Hoffman <zrhoffman@apache.org> Zach Hoffman <zrhoffman@apache.org>
Zach Jones <zachj1@gmail.com> Zach Jones <zachj1@gmail.com>
Zachary Amsden <zach@thundertoken.com> Zachary Amsden <zach@thundertoken.com>
Zachary Burkett <zburkett@splitcubestudios.com>
Zachary Gershman <zgershman@pivotal.io> Zachary Gershman <zgershman@pivotal.io>
Zaiyang Li <zaiyangli777@gmail.com>
Zak <zrjknill@gmail.com> Zak <zrjknill@gmail.com>
Zakatell Kanda <hi@zkanda.io> Zakatell Kanda <hi@zkanda.io>
Zellyn Hunter <zellyn@squareup.com> <zellyn@gmail.com> Zellyn Hunter <zellyn@squareup.com> <zellyn@gmail.com>
@ -2745,6 +2839,7 @@ Zhang Boyang <zhangboyang.id@gmail.com>
Zheng Dayu <davidzheng23@gmail.com> Zheng Dayu <davidzheng23@gmail.com>
Zheng Xu <zheng.xu@arm.com> Zheng Xu <zheng.xu@arm.com>
Zhengyu He <hzy@google.com> Zhengyu He <hzy@google.com>
Zhi Zheng <zhi.zheng052@gmail.com>
Zhongpeng Lin <zplin@uber.com> Zhongpeng Lin <zplin@uber.com>
Zhongtao Chen <chenzhongtao@126.com> Zhongtao Chen <chenzhongtao@126.com>
Zhongwei Yao <zhongwei.yao@arm.com> Zhongwei Yao <zhongwei.yao@arm.com>


@ -165,8 +165,8 @@ pkg reflect, method (Value) FieldByIndexErr([]int) (Value, error)
pkg reflect, method (Value) SetIterKey(*MapIter) pkg reflect, method (Value) SetIterKey(*MapIter)
pkg reflect, method (Value) SetIterValue(*MapIter) pkg reflect, method (Value) SetIterValue(*MapIter)
pkg reflect, method (Value) UnsafePointer() unsafe.Pointer pkg reflect, method (Value) UnsafePointer() unsafe.Pointer
pkg runtime/debug, method (*BuildInfo) MarshalText() ([]uint8, error) pkg runtime/debug, func ParseBuildInfo(string) (*BuildInfo, error)
pkg runtime/debug, method (*BuildInfo) UnmarshalText([]uint8) error pkg runtime/debug, method (*BuildInfo) String() string
pkg runtime/debug, type BuildInfo struct, GoVersion string pkg runtime/debug, type BuildInfo struct, GoVersion string
pkg runtime/debug, type BuildInfo struct, Settings []BuildSetting pkg runtime/debug, type BuildInfo struct, Settings []BuildSetting
pkg runtime/debug, type BuildSetting struct pkg runtime/debug, type BuildSetting struct


@ -0,0 +1,5 @@
pkg encoding/binary, type AppendByteOrder interface { AppendUint16, AppendUint32, AppendUint64, String }
pkg encoding/binary, type AppendByteOrder interface, AppendUint16([]uint8, uint16) []uint8
pkg encoding/binary, type AppendByteOrder interface, AppendUint32([]uint8, uint32) []uint8
pkg encoding/binary, type AppendByteOrder interface, AppendUint64([]uint8, uint64) []uint8
pkg encoding/binary, type AppendByteOrder interface, String() string

File diff suppressed because it is too large

doc/go1.19.html (new file, 61 lines)

@ -0,0 +1,61 @@
<!--{
"Title": "Go 1.19 Release Notes",
"Path": "/doc/go1.19"
}-->
<!--
NOTE: In this document and others in this directory, the convention is to
set fixed-width phrases with non-fixed-width spaces, as in
<code>hello</code> <code>world</code>.
Do not send CLs removing the interior tags from such phrases.
-->
<style>
main ul li { margin: 0.5em 0; }
</style>
<h2 id="introduction">DRAFT RELEASE NOTES — Introduction to Go 1.19</h2>
<p>
<strong>
Go 1.19 is not yet released. These are work-in-progress
release notes. Go 1.19 is expected to be released in August 2022.
</strong>
</p>
<h2 id="language">Changes to the language</h2>
<p>
TODO: complete this section
</p>
<h2 id="ports">Ports</h2>
<p>
TODO: complete this section, or delete if not needed
</p>
<h2 id="tools">Tools</h2>
<p>
TODO: complete this section, or delete if not needed
</p>
<h3 id="go-command">Go command</h3>
<p>
TODO: complete this section, or delete if not needed
</p>
<h2 id="runtime">Runtime</h2>
<p>
TODO: complete this section, or delete if not needed
</p>
<h2 id="compiler">Compiler</h2>
<p>
TODO: complete this section, or delete if not needed
</p>
<h2 id="linker">Linker</h2>
<p>
TODO: complete this section, or delete if not needed
</p>
<h2 id="library">Core library</h2>
<p>
TODO: complete this section
</p>
<h3 id="minor_library_changes">Minor changes to the library</h3>
<p>
As always, there are various minor changes and updates to the library,
made with the Go 1 <a href="/doc/go1compat">promise of compatibility</a>
in mind.
</p>
<p>
TODO: complete this section
</p>

File diff suppressed because it is too large


@ -63,7 +63,7 @@ func TestASAN(t *testing.T) {
// sanitizer library needs a // sanitizer library needs a
// symbolizer program and can't find it. // symbolizer program and can't find it.
const noSymbolizer = "external symbolizer" const noSymbolizer = "external symbolizer"
// Check if -asan option can correctly print where the error occured. // Check if -asan option can correctly print where the error occurred.
if tc.errorLocation != "" && if tc.errorLocation != "" &&
!strings.Contains(out, tc.errorLocation) && !strings.Contains(out, tc.errorLocation) &&
!strings.Contains(out, noSymbolizer) && !strings.Contains(out, noSymbolizer) &&


@ -6,6 +6,7 @@ package reboot_test
import ( import (
"io" "io"
"io/fs"
"os" "os"
"path/filepath" "path/filepath"
"strings" "strings"
@ -26,10 +27,14 @@ func overlayDir(dstRoot, srcRoot string) error {
return err return err
} }
return filepath.Walk(srcRoot, func(srcPath string, info os.FileInfo, err error) error { return filepath.WalkDir(srcRoot, func(srcPath string, entry fs.DirEntry, err error) error {
if err != nil || srcPath == srcRoot { if err != nil || srcPath == srcRoot {
return err return err
} }
if filepath.Base(srcPath) == "testdata" {
// We're just building, so no need to copy those.
return fs.SkipDir
}
suffix := strings.TrimPrefix(srcPath, srcRoot) suffix := strings.TrimPrefix(srcPath, srcRoot)
for len(suffix) > 0 && suffix[0] == filepath.Separator { for len(suffix) > 0 && suffix[0] == filepath.Separator {
@ -37,6 +42,7 @@ func overlayDir(dstRoot, srcRoot string) error {
} }
dstPath := filepath.Join(dstRoot, suffix) dstPath := filepath.Join(dstRoot, suffix)
info, err := entry.Info()
perm := info.Mode() & os.ModePerm perm := info.Mode() & os.ModePerm
if info.Mode()&os.ModeSymlink != 0 { if info.Mode()&os.ModeSymlink != 0 {
info, err = os.Stat(srcPath) info, err = os.Stat(srcPath)
@ -46,14 +52,15 @@ func overlayDir(dstRoot, srcRoot string) error {
perm = info.Mode() & os.ModePerm perm = info.Mode() & os.ModePerm
} }
// Always copy directories (don't symlink them). // Always make copies of directories.
// If we add a file in the overlay, we don't want to add it in the original. // If we add a file in the overlay, we don't want to add it in the original.
if info.IsDir() { if info.IsDir() {
return os.MkdirAll(dstPath, perm|0200) return os.MkdirAll(dstPath, perm|0200)
} }
// If the OS supports symlinks, use them instead of copying bytes. // If we can use a hard link, do that instead of copying bytes.
if err := os.Symlink(srcPath, dstPath); err == nil { // Go builds don't like symlinks in some cases, such as go:embed.
if err := os.Link(srcPath, dstPath); err == nil {
return nil return nil
} }


@ -12,6 +12,7 @@ import (
"path/filepath" "path/filepath"
"runtime" "runtime"
"testing" "testing"
"time"
) )
func TestRepeatBootstrap(t *testing.T) { func TestRepeatBootstrap(t *testing.T) {
@ -19,16 +20,14 @@ func TestRepeatBootstrap(t *testing.T) {
t.Skipf("skipping test that rebuilds the entire toolchain") t.Skipf("skipping test that rebuilds the entire toolchain")
} }
goroot, err := os.MkdirTemp("", "reboot-goroot") goroot := t.TempDir()
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(goroot)
gorootSrc := filepath.Join(goroot, "src") gorootSrc := filepath.Join(goroot, "src")
overlayStart := time.Now()
if err := overlayDir(gorootSrc, filepath.Join(runtime.GOROOT(), "src")); err != nil { if err := overlayDir(gorootSrc, filepath.Join(runtime.GOROOT(), "src")); err != nil {
t.Fatal(err) t.Fatal(err)
} }
t.Logf("GOROOT/src overlay set up in %s", time.Since(overlayStart))
if err := os.WriteFile(filepath.Join(goroot, "VERSION"), []byte(runtime.Version()), 0666); err != nil { if err := os.WriteFile(filepath.Join(goroot, "VERSION"), []byte(runtime.Version()), 0666); err != nil {
t.Fatal(err) t.Fatal(err)


@ -95,11 +95,11 @@ type rune = int32
type any = interface{} type any = interface{}
// comparable is an interface that is implemented by all comparable types // comparable is an interface that is implemented by all comparable types
// (booleans, numbers, strings, pointers, channels, interfaces, // (booleans, numbers, strings, pointers, channels, arrays of comparable types,
// arrays of comparable types, structs whose fields are all comparable types). // structs whose fields are all comparable types).
// The comparable interface may only be used as a type parameter constraint, // The comparable interface may only be used as a type parameter constraint,
// not as the type of a variable. // not as the type of a variable.
type comparable comparable type comparable interface{ comparable }
// iota is a predeclared identifier representing the untyped integer ordinal // iota is a predeclared identifier representing the untyped integer ordinal
// number of the current const specification in a (usually parenthesized) // number of the current const specification in a (usually parenthesized)


@ -372,6 +372,8 @@ func genSplit(s, sep []byte, sepSave, n int) [][]byte {
// n > 0: at most n subslices; the last subslice will be the unsplit remainder. // n > 0: at most n subslices; the last subslice will be the unsplit remainder.
// n == 0: the result is nil (zero subslices) // n == 0: the result is nil (zero subslices)
// n < 0: all subslices // n < 0: all subslices
//
// To split around the first instance of a separator, see Cut.
func SplitN(s, sep []byte, n int) [][]byte { return genSplit(s, sep, 0, n) } func SplitN(s, sep []byte, n int) [][]byte { return genSplit(s, sep, 0, n) }
// SplitAfterN slices s into subslices after each instance of sep and // SplitAfterN slices s into subslices after each instance of sep and
@ -389,6 +391,8 @@ func SplitAfterN(s, sep []byte, n int) [][]byte {
// the subslices between those separators. // the subslices between those separators.
// If sep is empty, Split splits after each UTF-8 sequence. // If sep is empty, Split splits after each UTF-8 sequence.
// It is equivalent to SplitN with a count of -1. // It is equivalent to SplitN with a count of -1.
//
// To split around the first instance of a separator, see Cut.
func Split(s, sep []byte) [][]byte { return genSplit(s, sep, 0, -1) } func Split(s, sep []byte) [][]byte { return genSplit(s, sep, 0, -1) }
// SplitAfter slices s into all subslices after each instance of sep and // SplitAfter slices s into all subslices after each instance of sep and


@ -155,7 +155,7 @@ as follows:
1. Remember I and FP. 1. Remember I and FP.
1. If T has zero size, add T to the stack sequence S and return. 1. If T has zero size, add T to the stack sequence S and return.
1. Try to register-assign V. 1. Try to register-assign V.
1. If step 2 failed, reset I and FP to the values from step 1, add T 1. If step 3 failed, reset I and FP to the values from step 1, add T
to the stack sequence S, and assign V to this field in S. to the stack sequence S, and assign V to this field in S.
Register-assignment of a value V of underlying type T works as follows: Register-assignment of a value V of underlying type T works as follows:


@ -62,8 +62,9 @@ func Compiling(pkgs []string) bool {
// at best instrumentation would cause infinite recursion. // at best instrumentation would cause infinite recursion.
var NoInstrumentPkgs = []string{ var NoInstrumentPkgs = []string{
"runtime/internal/atomic", "runtime/internal/atomic",
"runtime/internal/sys",
"runtime/internal/math", "runtime/internal/math",
"runtime/internal/sys",
"runtime/internal/syscall",
"runtime", "runtime",
"runtime/race", "runtime/race",
"runtime/msan", "runtime/msan",


@ -39,7 +39,6 @@ type DebugFlags struct {
TypeAssert int `help:"print information about type assertion inlining"` TypeAssert int `help:"print information about type assertion inlining"`
TypecheckInl int `help:"eager typechecking of inline function bodies"` TypecheckInl int `help:"eager typechecking of inline function bodies"`
Unified int `help:"enable unified IR construction"` Unified int `help:"enable unified IR construction"`
UnifiedQuirks int `help:"enable unified IR construction's quirks mode"`
WB int `help:"print information about write barriers"` WB int `help:"print information about write barriers"`
ABIWrap int `help:"print information about ABI wrapper generation"` ABIWrap int `help:"print information about ABI wrapper generation"`
MayMoreStack string `help:"call named function before all stack growth checks"` MayMoreStack string `help:"call named function before all stack growth checks"`


@ -55,7 +55,6 @@ type CmdFlags struct {
C CountFlag "help:\"disable printing of columns in error messages\"" C CountFlag "help:\"disable printing of columns in error messages\""
D string "help:\"set relative `path` for local imports\"" D string "help:\"set relative `path` for local imports\""
E CountFlag "help:\"debug symbol export\"" E CountFlag "help:\"debug symbol export\""
G CountFlag "help:\"accept generic code\""
I func(string) "help:\"add `directory` to import search path\"" I func(string) "help:\"add `directory` to import search path\""
K CountFlag "help:\"debug missing line numbers\"" K CountFlag "help:\"debug missing line numbers\""
L CountFlag "help:\"show full file names in error messages\"" L CountFlag "help:\"show full file names in error messages\""
@ -141,7 +140,6 @@ type CmdFlags struct {
// ParseFlags parses the command-line flags into Flag. // ParseFlags parses the command-line flags into Flag.
func ParseFlags() { func ParseFlags() {
Flag.G = 3
Flag.I = addImportDir Flag.I = addImportDir
Flag.LowerC = 1 Flag.LowerC = 1


@ -238,6 +238,15 @@ func (e *escape) goDeferStmt(n *ir.GoDeferStmt) {
fn.SetWrapper(true) fn.SetWrapper(true)
fn.Nname.SetType(types.NewSignature(types.LocalPkg, nil, nil, nil, nil)) fn.Nname.SetType(types.NewSignature(types.LocalPkg, nil, nil, nil, nil))
fn.Body = []ir.Node{call} fn.Body = []ir.Node{call}
if call, ok := call.(*ir.CallExpr); ok && call.Op() == ir.OCALLFUNC {
// If the callee is a named function, link to the original callee.
x := call.X
if x.Op() == ir.ONAME && x.(*ir.Name).Class == ir.PFUNC {
fn.WrappedFunc = call.X.(*ir.Name).Func
} else if x.Op() == ir.OMETHEXPR && ir.MethodExprFunc(x).Nname != nil {
fn.WrappedFunc = ir.MethodExprName(x).Func
}
}
clo := fn.OClosure clo := fn.OClosure
if n.Op() == ir.OGO { if n.Op() == ir.OGO {


@ -10,6 +10,7 @@ import (
"cmd/compile/internal/base" "cmd/compile/internal/base"
"cmd/compile/internal/ir" "cmd/compile/internal/ir"
"cmd/compile/internal/logopt" "cmd/compile/internal/logopt"
"cmd/compile/internal/typecheck"
"cmd/compile/internal/types" "cmd/compile/internal/types"
) )
@ -243,6 +244,9 @@ func (b *batch) flowClosure(k hole, clo *ir.ClosureExpr) {
n.SetByval(!loc.addrtaken && !loc.reassigned && n.Type().Size() <= 128) n.SetByval(!loc.addrtaken && !loc.reassigned && n.Type().Size() <= 128)
if !n.Byval() { if !n.Byval() {
n.SetAddrtaken(true) n.SetAddrtaken(true)
if n.Sym().Name == typecheck.LocalDictName {
base.FatalfAt(n.Pos(), "dictionary variable not captured by value")
}
} }
if base.Flag.LowerM > 1 { if base.Flag.LowerM > 1 {


@ -32,7 +32,6 @@ import (
"log" "log"
"os" "os"
"runtime" "runtime"
"sort"
) )
// handlePanic ensures that we print out an "internal compiler error" for any panic // handlePanic ensures that we print out an "internal compiler error" for any panic
@ -205,17 +204,6 @@ func Main(archInit func(*ssagen.ArchInfo)) {
// removal can skew the results (e.g., #43444). // removal can skew the results (e.g., #43444).
pkginit.MakeInit() pkginit.MakeInit()
// Stability quirk: sort top-level declarations, so we're not
// sensitive to the order that functions are added. In particular,
// the order that noder+typecheck add function closures is very
// subtle, and not important to reproduce.
if base.Debug.UnifiedQuirks != 0 {
s := typecheck.Target.Decls
sort.SliceStable(s, func(i, j int) bool {
return s[i].Pos().Before(s[j].Pos())
})
}
// Eliminate some obviously dead code. // Eliminate some obviously dead code.
// Must happen after typechecking. // Must happen after typechecking.
for _, n := range typecheck.Target.Decls { for _, n := range typecheck.Target.Decls {


@ -217,6 +217,10 @@ func dumpGlobalConst(n ir.Node) {
if ir.ConstOverflow(v, t) { if ir.ConstOverflow(v, t) {
return return
} }
} else {
// If the type of the constant is an instantiated generic, we need to emit
// that type so the linker knows about it. See issue 51245.
_ = reflectdata.TypeLinksym(t)
} }
base.Ctxt.DwarfIntConst(base.Ctxt.Pkgpath, n.Sym().Name, types.TypeSymName(t), ir.IntVal(t, v)) base.Ctxt.DwarfIntConst(base.Ctxt.Pkgpath, n.Sym().Name, types.TypeSymName(t), ir.IntVal(t, v))
} }
@ -263,6 +267,10 @@ func addGCLocals() {
objw.Global(x, int32(len(x.P)), obj.RODATA|obj.DUPOK) objw.Global(x, int32(len(x.P)), obj.RODATA|obj.DUPOK)
x.Set(obj.AttrStatic, true) x.Set(obj.AttrStatic, true)
} }
if x := fn.WrapInfo; x != nil && !x.OnList() {
objw.Global(x, int32(len(x.P)), obj.RODATA|obj.DUPOK)
x.Set(obj.AttrStatic, true)
}
} }
} }


@@ -180,6 +180,14 @@ func ImportData(imports map[string]*types2.Package, data, path string) (pkg *typ
 p.doDecl(localpkg, name)
 }
+// SetConstraint can't be called if the constraint type is not yet complete.
+// When type params are created in the 'P' case of (*importReader).obj(),
+// the associated constraint type may not be complete due to recursion.
+// Therefore, we defer calling SetConstraint there, and call it here instead
+// after all types are complete.
+for _, d := range p.later {
+d.t.SetConstraint(d.constraint)
+}
 // record all referenced packages as imports
 list := append(([]*types2.Package)(nil), pkgList[1:]...)
 sort.Sort(byPath(list))
@@ -191,6 +199,11 @@ func ImportData(imports map[string]*types2.Package, data, path string) (pkg *typ
 return localpkg, nil
 }
+type setConstraintArgs struct {
+t *types2.TypeParam
+constraint types2.Type
+}
 type iimporter struct {
 exportVersion int64
 ipath string
@@ -206,6 +219,9 @@ type iimporter struct {
 tparamIndex map[ident]*types2.TypeParam
 interfaceList []*types2.Interface
+// Arguments for calls to SetConstraint that are deferred due to recursive types
+later []setConstraintArgs
 }
 func (p *iimporter) doDecl(pkg *types2.Package, name string) {
@@ -401,7 +417,11 @@ func (r *importReader) obj(name string) {
 }
 iface.MarkImplicit()
 }
-t.SetConstraint(constraint)
+// The constraint type may not be complete, if we
+// are in the middle of a type recursion involving type
+// constraints. So, we defer SetConstraint until we have
+// completely set up all types in ImportData.
+r.p.later = append(r.p.later, setConstraintArgs{t: t, constraint: constraint})
 case 'V':
 typ := r.typ()


@@ -7,12 +7,17 @@
 package importer
 import (
+"cmd/compile/internal/base"
 "cmd/compile/internal/types2"
 "fmt"
 "go/token"
 "sync"
 )
+func assert(p bool) {
+base.Assert(p)
+}
 func errorf(format string, args ...interface{}) {
 panic(fmt.Sprintf(format, args...))
 }
@@ -132,3 +137,13 @@ type anyType struct{}
 func (t anyType) Underlying() types2.Type { return t }
 func (t anyType) String() string { return "any" }
+type derivedInfo struct {
+idx int
+needed bool
+}
+type typeInfo struct {
+idx int
+derived bool
+}


@@ -4,17 +4,18 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
-package noder
+package importer
 import (
 "cmd/compile/internal/base"
 "cmd/compile/internal/syntax"
 "cmd/compile/internal/types2"
 "cmd/internal/src"
+"internal/pkgbits"
 )
-type pkgReader2 struct {
+type pkgReader struct {
-pkgDecoder
+pkgbits.PkgDecoder
 ctxt *types2.Context
 imports map[string]*types2.Package
@@ -24,46 +25,46 @@ type pkgReader2 struct {
 typs []types2.Type
 }
-func readPackage2(ctxt *types2.Context, imports map[string]*types2.Package, input pkgDecoder) *types2.Package {
+func ReadPackage(ctxt *types2.Context, imports map[string]*types2.Package, input pkgbits.PkgDecoder) *types2.Package {
-pr := pkgReader2{
+pr := pkgReader{
-pkgDecoder: input,
+PkgDecoder: input,
 ctxt: ctxt,
 imports: imports,
-posBases: make([]*syntax.PosBase, input.numElems(relocPosBase)),
+posBases: make([]*syntax.PosBase, input.NumElems(pkgbits.RelocPosBase)),
-pkgs: make([]*types2.Package, input.numElems(relocPkg)),
+pkgs: make([]*types2.Package, input.NumElems(pkgbits.RelocPkg)),
-typs: make([]types2.Type, input.numElems(relocType)),
+typs: make([]types2.Type, input.NumElems(pkgbits.RelocType)),
 }
-r := pr.newReader(relocMeta, publicRootIdx, syncPublic)
+r := pr.newReader(pkgbits.RelocMeta, pkgbits.PublicRootIdx, pkgbits.SyncPublic)
 pkg := r.pkg()
-r.bool() // has init
+r.Bool() // has init
-for i, n := 0, r.len(); i < n; i++ {
+for i, n := 0, r.Len(); i < n; i++ {
 // As if r.obj(), but avoiding the Scope.Lookup call,
 // to avoid eager loading of imports.
-r.sync(syncObject)
+r.Sync(pkgbits.SyncObject)
-assert(!r.bool())
+assert(!r.Bool())
-r.p.objIdx(r.reloc(relocObj))
+r.p.objIdx(r.Reloc(pkgbits.RelocObj))
-assert(r.len() == 0)
+assert(r.Len() == 0)
 }
-r.sync(syncEOF)
+r.Sync(pkgbits.SyncEOF)
 pkg.MarkComplete()
 return pkg
 }
-type reader2 struct {
+type reader struct {
-decoder
+pkgbits.Decoder
-p *pkgReader2
+p *pkgReader
-dict *reader2Dict
+dict *readerDict
 }
-type reader2Dict struct {
+type readerDict struct {
 bounds []typeInfo
 tparams []*types2.TypeParam
@@ -72,53 +73,53 @@ type reader2Dict struct {
 derivedTypes []types2.Type
 }
-type reader2TypeBound struct {
+type readerTypeBound struct {
 derived bool
 boundIdx int
 }
-func (pr *pkgReader2) newReader(k reloc, idx int, marker syncMarker) *reader2 {
+func (pr *pkgReader) newReader(k pkgbits.RelocKind, idx int, marker pkgbits.SyncMarker) *reader {
-return &reader2{
+return &reader{
-decoder: pr.newDecoder(k, idx, marker),
+Decoder: pr.NewDecoder(k, idx, marker),
 p: pr,
 }
 }
 // @@@ Positions
-func (r *reader2) pos() syntax.Pos {
+func (r *reader) pos() syntax.Pos {
-r.sync(syncPos)
+r.Sync(pkgbits.SyncPos)
-if !r.bool() {
+if !r.Bool() {
 return syntax.Pos{}
 }
 // TODO(mdempsky): Delta encoding.
 posBase := r.posBase()
-line := r.uint()
+line := r.Uint()
-col := r.uint()
+col := r.Uint()
 return syntax.MakePos(posBase, line, col)
 }
-func (r *reader2) posBase() *syntax.PosBase {
+func (r *reader) posBase() *syntax.PosBase {
-return r.p.posBaseIdx(r.reloc(relocPosBase))
+return r.p.posBaseIdx(r.Reloc(pkgbits.RelocPosBase))
 }
-func (pr *pkgReader2) posBaseIdx(idx int) *syntax.PosBase {
+func (pr *pkgReader) posBaseIdx(idx int) *syntax.PosBase {
 if b := pr.posBases[idx]; b != nil {
 return b
 }
-r := pr.newReader(relocPosBase, idx, syncPosBase)
+r := pr.newReader(pkgbits.RelocPosBase, idx, pkgbits.SyncPosBase)
 var b *syntax.PosBase
-filename := r.string()
+filename := r.String()
-if r.bool() {
+if r.Bool() {
 b = syntax.NewTrimmedFileBase(filename, true)
 } else {
 pos := r.pos()
-line := r.uint()
+line := r.Uint()
-col := r.uint()
+col := r.Uint()
 b = syntax.NewLineBase(pos, filename, true, line, col)
 }
@@ -128,45 +129,45 @@ func (pr *pkgReader2) posBaseIdx(idx int) *syntax.PosBase {
 // @@@ Packages
-func (r *reader2) pkg() *types2.Package {
+func (r *reader) pkg() *types2.Package {
-r.sync(syncPkg)
+r.Sync(pkgbits.SyncPkg)
-return r.p.pkgIdx(r.reloc(relocPkg))
+return r.p.pkgIdx(r.Reloc(pkgbits.RelocPkg))
 }
-func (pr *pkgReader2) pkgIdx(idx int) *types2.Package {
+func (pr *pkgReader) pkgIdx(idx int) *types2.Package {
 // TODO(mdempsky): Consider using some non-nil pointer to indicate
 // the universe scope, so we don't need to keep re-reading it.
 if pkg := pr.pkgs[idx]; pkg != nil {
 return pkg
 }
-pkg := pr.newReader(relocPkg, idx, syncPkgDef).doPkg()
+pkg := pr.newReader(pkgbits.RelocPkg, idx, pkgbits.SyncPkgDef).doPkg()
 pr.pkgs[idx] = pkg
 return pkg
 }
-func (r *reader2) doPkg() *types2.Package {
+func (r *reader) doPkg() *types2.Package {
-path := r.string()
+path := r.String()
 if path == "builtin" {
 return nil // universe
 }
 if path == "" {
-path = r.p.pkgPath
+path = r.p.PkgPath()
 }
 if pkg := r.p.imports[path]; pkg != nil {
 return pkg
 }
-name := r.string()
+name := r.String()
-height := r.len()
+height := r.Len()
 pkg := types2.NewPackageHeight(path, name, height)
 r.p.imports[path] = pkg
 // TODO(mdempsky): The list of imported packages is important for
 // go/types, but we could probably skip populating it for types2.
-imports := make([]*types2.Package, r.len())
+imports := make([]*types2.Package, r.Len())
 for i := range imports {
 imports[i] = r.pkg()
 }
@@ -177,19 +178,19 @@ func (r *reader2) doPkg() *types2.Package {
 // @@@ Types
-func (r *reader2) typ() types2.Type {
+func (r *reader) typ() types2.Type {
 return r.p.typIdx(r.typInfo(), r.dict)
 }
-func (r *reader2) typInfo() typeInfo {
+func (r *reader) typInfo() typeInfo {
-r.sync(syncType)
+r.Sync(pkgbits.SyncType)
-if r.bool() {
+if r.Bool() {
-return typeInfo{idx: r.len(), derived: true}
+return typeInfo{idx: r.Len(), derived: true}
 }
-return typeInfo{idx: r.reloc(relocType), derived: false}
+return typeInfo{idx: r.Reloc(pkgbits.RelocType), derived: false}
 }
-func (pr *pkgReader2) typIdx(info typeInfo, dict *reader2Dict) types2.Type {
+func (pr *pkgReader) typIdx(info typeInfo, dict *readerDict) types2.Type {
 idx := info.idx
 var where *types2.Type
 if info.derived {
@@ -203,7 +204,7 @@ func (pr *pkgReader2) typIdx(info typeInfo, dict *reader2Dict) types2.Type {
 return typ
 }
-r := pr.newReader(relocType, idx, syncTypeIdx)
+r := pr.newReader(pkgbits.RelocType, idx, pkgbits.SyncTypeIdx)
 r.dict = dict
 typ := r.doTyp()
@@ -218,16 +219,16 @@ func (pr *pkgReader2) typIdx(info typeInfo, dict *reader2Dict) types2.Type {
 return typ
 }
-func (r *reader2) doTyp() (res types2.Type) {
+func (r *reader) doTyp() (res types2.Type) {
-switch tag := codeType(r.code(syncType)); tag {
+switch tag := pkgbits.CodeType(r.Code(pkgbits.SyncType)); tag {
 default:
 base.FatalfAt(src.NoXPos, "unhandled type tag: %v", tag)
 panic("unreachable")
-case typeBasic:
+case pkgbits.TypeBasic:
-return types2.Typ[r.len()]
+return types2.Typ[r.Len()]
-case typeNamed:
+case pkgbits.TypeNamed:
 obj, targs := r.obj()
 name := obj.(*types2.TypeName)
 if len(targs) != 0 {
@@ -236,41 +237,41 @@ func (r *reader2) doTyp() (res types2.Type) {
 }
 return name.Type()
-case typeTypeParam:
+case pkgbits.TypeTypeParam:
-return r.dict.tparams[r.len()]
+return r.dict.tparams[r.Len()]
-case typeArray:
+case pkgbits.TypeArray:
-len := int64(r.uint64())
+len := int64(r.Uint64())
 return types2.NewArray(r.typ(), len)
-case typeChan:
+case pkgbits.TypeChan:
-dir := types2.ChanDir(r.len())
+dir := types2.ChanDir(r.Len())
 return types2.NewChan(dir, r.typ())
-case typeMap:
+case pkgbits.TypeMap:
 return types2.NewMap(r.typ(), r.typ())
-case typePointer:
+case pkgbits.TypePointer:
 return types2.NewPointer(r.typ())
-case typeSignature:
+case pkgbits.TypeSignature:
 return r.signature(nil, nil, nil)
-case typeSlice:
+case pkgbits.TypeSlice:
 return types2.NewSlice(r.typ())
-case typeStruct:
+case pkgbits.TypeStruct:
 return r.structType()
-case typeInterface:
+case pkgbits.TypeInterface:
 return r.interfaceType()
-case typeUnion:
+case pkgbits.TypeUnion:
 return r.unionType()
 }
 }
-func (r *reader2) structType() *types2.Struct {
+func (r *reader) structType() *types2.Struct {
-fields := make([]*types2.Var, r.len())
+fields := make([]*types2.Var, r.Len())
 var tags []string
 for i := range fields {
 pos := r.pos()
 pkg, name := r.selector()
 ftyp := r.typ()
-tag := r.string()
+tag := r.String()
-embedded := r.bool()
+embedded := r.Bool()
 fields[i] = types2.NewField(pos, pkg, name, ftyp, embedded)
 if tag != "" {
@@ -283,17 +284,18 @@ func (r *reader2) structType() *types2.Struct {
 return types2.NewStruct(fields, tags)
 }
-func (r *reader2) unionType() *types2.Union {
+func (r *reader) unionType() *types2.Union {
-terms := make([]*types2.Term, r.len())
+terms := make([]*types2.Term, r.Len())
 for i := range terms {
-terms[i] = types2.NewTerm(r.bool(), r.typ())
+terms[i] = types2.NewTerm(r.Bool(), r.typ())
 }
 return types2.NewUnion(terms)
 }
-func (r *reader2) interfaceType() *types2.Interface {
+func (r *reader) interfaceType() *types2.Interface {
-methods := make([]*types2.Func, r.len())
+methods := make([]*types2.Func, r.Len())
-embeddeds := make([]types2.Type, r.len())
+embeddeds := make([]types2.Type, r.Len())
+implicit := len(methods) == 0 && len(embeddeds) == 1 && r.Bool()
 for i := range methods {
 pos := r.pos()
@@ -306,30 +308,34 @@ func (r *reader2) interfaceType() *types2.Interface {
 embeddeds[i] = r.typ()
 }
-return types2.NewInterfaceType(methods, embeddeds)
+iface := types2.NewInterfaceType(methods, embeddeds)
+if implicit {
+iface.MarkImplicit()
+}
+return iface
 }
-func (r *reader2) signature(recv *types2.Var, rtparams, tparams []*types2.TypeParam) *types2.Signature {
+func (r *reader) signature(recv *types2.Var, rtparams, tparams []*types2.TypeParam) *types2.Signature {
-r.sync(syncSignature)
+r.Sync(pkgbits.SyncSignature)
 params := r.params()
 results := r.params()
-variadic := r.bool()
+variadic := r.Bool()
 return types2.NewSignatureType(recv, rtparams, tparams, params, results, variadic)
 }
-func (r *reader2) params() *types2.Tuple {
+func (r *reader) params() *types2.Tuple {
-r.sync(syncParams)
+r.Sync(pkgbits.SyncParams)
-params := make([]*types2.Var, r.len())
+params := make([]*types2.Var, r.Len())
 for i := range params {
 params[i] = r.param()
 }
 return types2.NewTuple(params...)
 }
-func (r *reader2) param() *types2.Var {
+func (r *reader) param() *types2.Var {
-r.sync(syncParam)
+r.Sync(pkgbits.SyncParam)
 pos := r.pos()
 pkg, name := r.localIdent()
@@ -340,15 +346,15 @@ func (r *reader2) param() *types2.Var {
 // @@@ Objects
-func (r *reader2) obj() (types2.Object, []types2.Type) {
+func (r *reader) obj() (types2.Object, []types2.Type) {
-r.sync(syncObject)
+r.Sync(pkgbits.SyncObject)
-assert(!r.bool())
+assert(!r.Bool())
-pkg, name := r.p.objIdx(r.reloc(relocObj))
+pkg, name := r.p.objIdx(r.Reloc(pkgbits.RelocObj))
 obj := pkg.Scope().Lookup(name)
-targs := make([]types2.Type, r.len())
+targs := make([]types2.Type, r.Len())
 for i := range targs {
 targs[i] = r.typ()
 }
@@ -356,47 +362,47 @@ func (r *reader2) obj() (types2.Object, []types2.Type) {
 return obj, targs
 }
-func (pr *pkgReader2) objIdx(idx int) (*types2.Package, string) {
+func (pr *pkgReader) objIdx(idx int) (*types2.Package, string) {
-rname := pr.newReader(relocName, idx, syncObject1)
+rname := pr.newReader(pkgbits.RelocName, idx, pkgbits.SyncObject1)
 objPkg, objName := rname.qualifiedIdent()
 assert(objName != "")
-tag := codeObj(rname.code(syncCodeObj))
+tag := pkgbits.CodeObj(rname.Code(pkgbits.SyncCodeObj))
-if tag == objStub {
+if tag == pkgbits.ObjStub {
 assert(objPkg == nil || objPkg == types2.Unsafe)
 return objPkg, objName
 }
+objPkg.Scope().InsertLazy(objName, func() types2.Object {
 dict := pr.objDictIdx(idx)
-r := pr.newReader(relocObj, idx, syncObject1)
+r := pr.newReader(pkgbits.RelocObj, idx, pkgbits.SyncObject1)
 r.dict = dict
-objPkg.Scope().InsertLazy(objName, func() types2.Object {
 switch tag {
 default:
 panic("weird")
-case objAlias:
+case pkgbits.ObjAlias:
 pos := r.pos()
 typ := r.typ()
 return types2.NewTypeName(pos, objPkg, objName, typ)
-case objConst:
+case pkgbits.ObjConst:
 pos := r.pos()
 typ := r.typ()
-val := r.value()
+val := r.Value()
 return types2.NewConst(pos, objPkg, objName, typ, val)
-case objFunc:
+case pkgbits.ObjFunc:
 pos := r.pos()
 tparams := r.typeParamNames()
 sig := r.signature(nil, nil, tparams)
 return types2.NewFunc(pos, objPkg, objName, sig)
-case objType:
+case pkgbits.ObjType:
 pos := r.pos()
 return types2.NewTypeNameLazy(pos, objPkg, objName, func(named *types2.Named) (tparams []*types2.TypeParam, underlying types2.Type, methods []*types2.Func) {
@@ -408,7 +414,7 @@ func (pr *pkgReader2) objIdx(idx int) (*types2.Package, string) {
 // about it, so maybe we can avoid worrying about that here.
 underlying = r.typ().Underlying()
-methods = make([]*types2.Func, r.len())
+methods = make([]*types2.Func, r.Len())
 for i := range methods {
 methods[i] = r.method()
 }
@@ -416,7 +422,7 @@ func (pr *pkgReader2) objIdx(idx int) (*types2.Package, string) {
 return
 })
-case objVar:
+case pkgbits.ObjVar:
 pos := r.pos()
 typ := r.typ()
 return types2.NewVar(pos, objPkg, objName, typ)
@@ -426,37 +432,37 @@ func (pr *pkgReader2) objIdx(idx int) (*types2.Package, string) {
 return objPkg, objName
 }
-func (pr *pkgReader2) objDictIdx(idx int) *reader2Dict {
+func (pr *pkgReader) objDictIdx(idx int) *readerDict {
-r := pr.newReader(relocObjDict, idx, syncObject1)
+r := pr.newReader(pkgbits.RelocObjDict, idx, pkgbits.SyncObject1)
-var dict reader2Dict
+var dict readerDict
-if implicits := r.len(); implicits != 0 {
+if implicits := r.Len(); implicits != 0 {
 base.Fatalf("unexpected object with %v implicit type parameter(s)", implicits)
 }
-dict.bounds = make([]typeInfo, r.len())
+dict.bounds = make([]typeInfo, r.Len())
 for i := range dict.bounds {
 dict.bounds[i] = r.typInfo()
 }
-dict.derived = make([]derivedInfo, r.len())
+dict.derived = make([]derivedInfo, r.Len())
 dict.derivedTypes = make([]types2.Type, len(dict.derived))
 for i := range dict.derived {
-dict.derived[i] = derivedInfo{r.reloc(relocType), r.bool()}
+dict.derived[i] = derivedInfo{r.Reloc(pkgbits.RelocType), r.Bool()}
 }
-// function references follow, but reader2 doesn't need those
+// function references follow, but reader doesn't need those
 return &dict
 }
-func (r *reader2) typeParamNames() []*types2.TypeParam {
+func (r *reader) typeParamNames() []*types2.TypeParam {
-r.sync(syncTypeParamNames)
+r.Sync(pkgbits.SyncTypeParamNames)
 // Note: This code assumes it only processes objects without
 // implement type parameters. This is currently fine, because
-// reader2 is only used to read in exported declarations, which are
+// reader is only used to read in exported declarations, which are
 // always package scoped.
 if len(r.dict.bounds) == 0 {
@@ -484,8 +490,8 @@ func (r *reader2) typeParamNames() []*types2.TypeParam {
 return r.dict.tparams
 }
-func (r *reader2) method() *types2.Func {
+func (r *reader) method() *types2.Func {
-r.sync(syncMethod)
+r.Sync(pkgbits.SyncMethod)
 pos := r.pos()
 pkg, name := r.selector()
@@ -496,11 +502,11 @@ func (r *reader2) method() *types2.Func {
 return types2.NewFunc(pos, pkg, name, sig)
 }
-func (r *reader2) qualifiedIdent() (*types2.Package, string) { return r.ident(syncSym) }
+func (r *reader) qualifiedIdent() (*types2.Package, string) { return r.ident(pkgbits.SyncSym) }
-func (r *reader2) localIdent() (*types2.Package, string) { return r.ident(syncLocalIdent) }
+func (r *reader) localIdent() (*types2.Package, string) { return r.ident(pkgbits.SyncLocalIdent) }
-func (r *reader2) selector() (*types2.Package, string) { return r.ident(syncSelector) }
+func (r *reader) selector() (*types2.Package, string) { return r.ident(pkgbits.SyncSelector) }
-func (r *reader2) ident(marker syncMarker) (*types2.Package, string) {
+func (r *reader) ident(marker pkgbits.SyncMarker) (*types2.Package, string) {
-r.sync(marker)
+r.Sync(marker)
-return r.pkg(), r.string()
+return r.pkg(), r.String()
 }


@@ -79,7 +79,7 @@ func DeepCopy(pos src.XPos, n Node) Node {
 var edit func(Node) Node
 edit = func(x Node) Node {
 switch x.Op() {
-case OPACK, ONAME, ONONAME, OLITERAL, ONIL, OTYPE:
+case ONAME, ONONAME, OLITERAL, ONIL, OTYPE:
 return x
 }
 x = Copy(x)


@@ -202,7 +202,10 @@ type CompLitExpr struct {
 Ntype Ntype
 List Nodes // initialized values
 Prealloc *Name
-Len int64 // backing array length for OSLICELIT
+// For OSLICELIT, Len is the backing array length.
+// For OMAPLIT, Len is the number of entries that we've removed from List and
+// generated explicit mapassign calls for. This is used to inform the map alloc hint.
+Len int64
 }
 func NewCompLitExpr(pos src.XPos, op Op, typ Ntype, list []Node) *CompLitExpr {


@@ -202,7 +202,6 @@ var OpPrec = []int{
 ONIL: 8,
 ONONAME: 8,
 OOFFSETOF: 8,
-OPACK: 8,
 OPANIC: 8,
 OPAREN: 8,
 OPRINTN: 8,
@@ -213,13 +212,7 @@ var OpPrec = []int{
 OSTR2BYTES: 8,
 OSTR2RUNES: 8,
 OSTRUCTLIT: 8,
-OTARRAY: 8,
-OTSLICE: 8,
-OTCHAN: 8,
 OTFUNC: 8,
-OTINTER: 8,
-OTMAP: 8,
-OTSTRUCT: 8,
 OTYPE: 8,
 OUNSAFEADD: 8,
 OUNSAFESLICE: 8,
@@ -640,7 +633,7 @@ func exprFmt(n Node, s fmt.State, prec int) {
 return
 }
 fallthrough
-case OPACK, ONONAME:
+case ONONAME:
 fmt.Fprint(s, n.Sym())
 case OLINKSYMOFFSET:
@@ -654,49 +647,6 @@ func exprFmt(n Node, s fmt.State, prec int) {
 }
 fmt.Fprintf(s, "%v", n.Type())
-case OTSLICE:
-n := n.(*SliceType)
-if n.DDD {
-fmt.Fprintf(s, "...%v", n.Elem)
-} else {
-fmt.Fprintf(s, "[]%v", n.Elem) // happens before typecheck
-}
-case OTARRAY:
-n := n.(*ArrayType)
-if n.Len == nil {
-fmt.Fprintf(s, "[...]%v", n.Elem)
-} else {
-fmt.Fprintf(s, "[%v]%v", n.Len, n.Elem)
-}
-case OTMAP:
-n := n.(*MapType)
-fmt.Fprintf(s, "map[%v]%v", n.Key, n.Elem)
-case OTCHAN:
-n := n.(*ChanType)
-switch n.Dir {
-case types.Crecv:
-fmt.Fprintf(s, "<-chan %v", n.Elem)
-case types.Csend:
-fmt.Fprintf(s, "chan<- %v", n.Elem)
-default:
-if n.Elem != nil && n.Elem.Op() == OTCHAN && n.Elem.(*ChanType).Dir == types.Crecv {
-fmt.Fprintf(s, "chan (%v)", n.Elem)
-} else {
-fmt.Fprintf(s, "chan %v", n.Elem)
-}
-}
-case OTSTRUCT:
-fmt.Fprint(s, "<struct>")
-case OTINTER:
-fmt.Fprint(s, "<inter>")
 case OTFUNC:
 fmt.Fprint(s, "<func>")


@@ -31,8 +31,7 @@ import (
 // using a special data structure passed in a register.
 //
 // A method declaration is represented like functions, except f.Sym
-// will be the qualified method name (e.g., "T.m") and
-// f.Func.Shortname is the bare method name (e.g., "m").
+// will be the qualified method name (e.g., "T.m").
 //
 // A method expression (T.M) is represented as an OMETHEXPR node,
 // in which n.Left and n.Right point to the type and method, respectively.
@@ -56,8 +55,6 @@ type Func struct {
 Nname *Name // ONAME node
 OClosure *ClosureExpr // OCLOSURE node
-Shortname *types.Sym
 // Extra entry code for the function. For example, allocate and initialize
 // memory for escaping parameters.
 Enter Nodes
@@ -133,6 +130,10 @@ type Func struct {
 // function for go:nowritebarrierrec analysis. Only filled in
 // if nowritebarrierrecCheck != nil.
 NWBRCalls *[]SymAndPos
+// For wrapper functions, WrappedFunc point to the original Func.
+// Currently only used for go/defer wrappers.
+WrappedFunc *Func
 }
 func NewFunc(pos src.XPos) *Func {


@@ -48,7 +48,6 @@ type Name struct {
 Opt interface{} // for use by escape analysis
 Embed *[]Embed // list of embedded files, for ONAME var
-PkgName *PkgName // real package for import . names
 // For a local variable (not param) or extern, the initializing assignment (OAS or OAS2).
 // For a closure var, the ONAME node of the outer captured variable.
 // For the case-local variables of a type switch, the type switch guard (OTYPESW).
@@ -536,22 +535,3 @@ type Embed struct {
 Pos src.XPos
 Patterns []string
 }
-// A Pack is an identifier referring to an imported package.
-type PkgName struct {
-miniNode
-sym *types.Sym
-Pkg *types.Pkg
-Used bool
-}
-func (p *PkgName) Sym() *types.Sym { return p.sym }
-func (*PkgName) CanBeNtype() {}
-func NewPkgName(pos src.XPos, sym *types.Sym, pkg *types.Pkg) *PkgName {
-p := &PkgName{sym: sym, Pkg: pkg}
-p.op = OPACK
-p.pos = pos
-return p
-}


@@ -118,7 +118,6 @@ const (
 // Also used for a qualified package identifier that hasn't been resolved yet.
 ONONAME
 OTYPE // type name
-OPACK // import
 OLITERAL // literal
 ONIL // nil
@@ -291,15 +290,10 @@ const (
 OFUNCINST // instantiation of a generic function
 // types
-OTCHAN // chan int
-OTMAP // map[string]int
-OTSTRUCT // struct{}
-OTINTER // interface{}
 // OTFUNC: func() - Recv is receiver field, Params is list of param fields, Results is
 // list of result fields.
+// TODO(mdempsky): Remove.
 OTFUNC
-OTARRAY // [8]int or [...]int
-OTSLICE // []int
 // misc
 // intermediate representation of an inlined call. Uses Init (assignments
@@ -533,7 +527,7 @@ func HasNamedResults(fn *Func) bool {
 // their usage position.
 func HasUniquePos(n Node) bool {
 switch n.Op() {
-case ONAME, OPACK:
+case ONAME:
 return false
 case OLITERAL, ONIL, OTYPE:
 if n.Sym() != nil {


@ -59,29 +59,6 @@ func (n *AddrExpr) editChildren(edit func(Node) Node) {
} }
} }
func (n *ArrayType) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *ArrayType) copy() Node {
c := *n
return &c
}
func (n *ArrayType) doChildren(do func(Node) bool) bool {
if n.Len != nil && do(n.Len) {
return true
}
if n.Elem != nil && do(n.Elem) {
return true
}
return false
}
func (n *ArrayType) editChildren(edit func(Node) Node) {
if n.Len != nil {
n.Len = edit(n.Len).(Node)
}
if n.Elem != nil {
n.Elem = edit(n.Elem).(Ntype)
}
}
func (n *AssignListStmt) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) } func (n *AssignListStmt) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *AssignListStmt) copy() Node { func (n *AssignListStmt) copy() Node {
c := *n c := *n
@ -309,23 +286,6 @@ func (n *CaseClause) editChildren(edit func(Node) Node) {
editNodes(n.Body, edit) editNodes(n.Body, edit)
} }
func (n *ChanType) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *ChanType) copy() Node {
c := *n
return &c
}
func (n *ChanType) doChildren(do func(Node) bool) bool {
if n.Elem != nil && do(n.Elem) {
return true
}
return false
}
func (n *ChanType) editChildren(edit func(Node) Node) {
if n.Elem != nil {
n.Elem = edit(n.Elem).(Ntype)
}
}
func (n *ClosureExpr) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) } func (n *ClosureExpr) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *ClosureExpr) copy() Node { func (n *ClosureExpr) copy() Node {
c := *n c := *n
@ -752,22 +712,6 @@ func (n *InstExpr) editChildren(edit func(Node) Node) {
editNodes(n.Targs, edit) editNodes(n.Targs, edit)
} }
func (n *InterfaceType) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *InterfaceType) copy() Node {
c := *n
c.Methods = copyFields(c.Methods)
return &c
}
func (n *InterfaceType) doChildren(do func(Node) bool) bool {
if doFields(n.Methods, do) {
return true
}
return false
}
func (n *InterfaceType) editChildren(edit func(Node) Node) {
editFields(n.Methods, edit)
}
func (n *KeyExpr) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) } func (n *KeyExpr) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *KeyExpr) copy() Node { func (n *KeyExpr) copy() Node {
c := *n c := *n
@ -884,29 +828,6 @@ func (n *MakeExpr) editChildren(edit func(Node) Node) {
} }
} }
func (n *MapType) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *MapType) copy() Node {
c := *n
return &c
}
func (n *MapType) doChildren(do func(Node) bool) bool {
if n.Key != nil && do(n.Key) {
return true
}
if n.Elem != nil && do(n.Elem) {
return true
}
return false
}
func (n *MapType) editChildren(edit func(Node) Node) {
if n.Key != nil {
n.Key = edit(n.Key).(Ntype)
}
if n.Elem != nil {
n.Elem = edit(n.Elem).(Ntype)
}
}
func (n *Name) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) } func (n *Name) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *NilExpr) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) } func (n *NilExpr) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
@ -947,17 +868,6 @@ func (n *ParenExpr) editChildren(edit func(Node) Node) {
} }
} }
func (n *PkgName) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *PkgName) copy() Node {
c := *n
return &c
}
func (n *PkgName) doChildren(do func(Node) bool) bool {
return false
}
func (n *PkgName) editChildren(edit func(Node) Node) {
}
func (n *RangeStmt) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *RangeStmt) copy() Node {
c := *n
@@ -1212,23 +1122,6 @@ func (n *SliceHeaderExpr) editChildren(edit func(Node) Node) {
}
}
func (n *SliceType) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *SliceType) copy() Node {
c := *n
return &c
}
func (n *SliceType) doChildren(do func(Node) bool) bool {
if n.Elem != nil && do(n.Elem) {
return true
}
return false
}
func (n *SliceType) editChildren(edit func(Node) Node) {
if n.Elem != nil {
n.Elem = edit(n.Elem).(Ntype)
}
}
func (n *StarExpr) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *StarExpr) copy() Node {
c := *n
@@ -1273,22 +1166,6 @@ func (n *StructKeyExpr) editChildren(edit func(Node) Node) {
}
}
func (n *StructType) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *StructType) copy() Node {
c := *n
c.Fields = copyFields(c.Fields)
return &c
}
func (n *StructType) doChildren(do func(Node) bool) bool {
if doFields(n.Fields, do) {
return true
}
return false
}
func (n *StructType) editChildren(edit func(Node) Node) {
editFields(n.Fields, edit)
}
func (n *SwitchStmt) Format(s fmt.State, verb rune) { fmtNode(n, s, verb) }
func (n *SwitchStmt) copy() Node {
c := *n


@@ -12,169 +12,162 @@ func _() {
_ = x[ONAME-1]
_ = x[ONONAME-2]
_ = x[OTYPE-3]
-_ = x[OPACK-4]
+_ = x[OLITERAL-4]
-_ = x[OLITERAL-5]
+_ = x[ONIL-5]
-_ = x[ONIL-6]
+_ = x[OADD-6]
-_ = x[OADD-7]
+_ = x[OSUB-7]
-_ = x[OSUB-8]
+_ = x[OOR-8]
-_ = x[OOR-9]
+_ = x[OXOR-9]
-_ = x[OXOR-10]
+_ = x[OADDSTR-10]
-_ = x[OADDSTR-11]
+_ = x[OADDR-11]
-_ = x[OADDR-12]
+_ = x[OANDAND-12]
-_ = x[OANDAND-13]
+_ = x[OAPPEND-13]
-_ = x[OAPPEND-14]
+_ = x[OBYTES2STR-14]
-_ = x[OBYTES2STR-15]
+_ = x[OBYTES2STRTMP-15]
-_ = x[OBYTES2STRTMP-16]
+_ = x[ORUNES2STR-16]
-_ = x[ORUNES2STR-17]
+_ = x[OSTR2BYTES-17]
-_ = x[OSTR2BYTES-18]
+_ = x[OSTR2BYTESTMP-18]
-_ = x[OSTR2BYTESTMP-19]
+_ = x[OSTR2RUNES-19]
-_ = x[OSTR2RUNES-20]
+_ = x[OSLICE2ARRPTR-20]
-_ = x[OSLICE2ARRPTR-21]
+_ = x[OAS-21]
-_ = x[OAS-22]
+_ = x[OAS2-22]
-_ = x[OAS2-23]
+_ = x[OAS2DOTTYPE-23]
-_ = x[OAS2DOTTYPE-24]
+_ = x[OAS2FUNC-24]
-_ = x[OAS2FUNC-25]
+_ = x[OAS2MAPR-25]
-_ = x[OAS2MAPR-26]
+_ = x[OAS2RECV-26]
-_ = x[OAS2RECV-27]
+_ = x[OASOP-27]
-_ = x[OASOP-28]
+_ = x[OCALL-28]
-_ = x[OCALL-29]
+_ = x[OCALLFUNC-29]
-_ = x[OCALLFUNC-30]
+_ = x[OCALLMETH-30]
-_ = x[OCALLMETH-31]
+_ = x[OCALLINTER-31]
-_ = x[OCALLINTER-32]
+_ = x[OCAP-32]
-_ = x[OCAP-33]
+_ = x[OCLOSE-33]
-_ = x[OCLOSE-34]
+_ = x[OCLOSURE-34]
-_ = x[OCLOSURE-35]
+_ = x[OCOMPLIT-35]
-_ = x[OCOMPLIT-36]
+_ = x[OMAPLIT-36]
-_ = x[OMAPLIT-37]
+_ = x[OSTRUCTLIT-37]
-_ = x[OSTRUCTLIT-38]
+_ = x[OARRAYLIT-38]
-_ = x[OARRAYLIT-39]
+_ = x[OSLICELIT-39]
-_ = x[OSLICELIT-40]
+_ = x[OPTRLIT-40]
-_ = x[OPTRLIT-41]
+_ = x[OCONV-41]
-_ = x[OCONV-42]
+_ = x[OCONVIFACE-42]
-_ = x[OCONVIFACE-43]
+_ = x[OCONVIDATA-43]
-_ = x[OCONVIDATA-44]
+_ = x[OCONVNOP-44]
-_ = x[OCONVNOP-45]
+_ = x[OCOPY-45]
-_ = x[OCOPY-46]
+_ = x[ODCL-46]
-_ = x[ODCL-47]
+_ = x[ODCLFUNC-47]
-_ = x[ODCLFUNC-48]
+_ = x[ODCLCONST-48]
-_ = x[ODCLCONST-49]
+_ = x[ODCLTYPE-49]
-_ = x[ODCLTYPE-50]
+_ = x[ODELETE-50]
-_ = x[ODELETE-51]
+_ = x[ODOT-51]
-_ = x[ODOT-52]
+_ = x[ODOTPTR-52]
-_ = x[ODOTPTR-53]
+_ = x[ODOTMETH-53]
-_ = x[ODOTMETH-54]
+_ = x[ODOTINTER-54]
-_ = x[ODOTINTER-55]
+_ = x[OXDOT-55]
-_ = x[OXDOT-56]
+_ = x[ODOTTYPE-56]
-_ = x[ODOTTYPE-57]
+_ = x[ODOTTYPE2-57]
-_ = x[ODOTTYPE2-58]
+_ = x[OEQ-58]
-_ = x[OEQ-59]
+_ = x[ONE-59]
-_ = x[ONE-60]
+_ = x[OLT-60]
-_ = x[OLT-61]
+_ = x[OLE-61]
-_ = x[OLE-62]
+_ = x[OGE-62]
-_ = x[OGE-63]
+_ = x[OGT-63]
-_ = x[OGT-64]
+_ = x[ODEREF-64]
-_ = x[ODEREF-65]
+_ = x[OINDEX-65]
-_ = x[OINDEX-66]
+_ = x[OINDEXMAP-66]
-_ = x[OINDEXMAP-67]
+_ = x[OKEY-67]
-_ = x[OKEY-68]
+_ = x[OSTRUCTKEY-68]
-_ = x[OSTRUCTKEY-69]
+_ = x[OLEN-69]
-_ = x[OLEN-70]
+_ = x[OMAKE-70]
-_ = x[OMAKE-71]
+_ = x[OMAKECHAN-71]
-_ = x[OMAKECHAN-72]
+_ = x[OMAKEMAP-72]
-_ = x[OMAKEMAP-73]
+_ = x[OMAKESLICE-73]
-_ = x[OMAKESLICE-74]
+_ = x[OMAKESLICECOPY-74]
-_ = x[OMAKESLICECOPY-75]
+_ = x[OMUL-75]
-_ = x[OMUL-76]
+_ = x[ODIV-76]
-_ = x[ODIV-77]
+_ = x[OMOD-77]
-_ = x[OMOD-78]
+_ = x[OLSH-78]
-_ = x[OLSH-79]
+_ = x[ORSH-79]
-_ = x[ORSH-80]
+_ = x[OAND-80]
-_ = x[OAND-81]
+_ = x[OANDNOT-81]
-_ = x[OANDNOT-82]
+_ = x[ONEW-82]
-_ = x[ONEW-83]
+_ = x[ONOT-83]
-_ = x[ONOT-84]
+_ = x[OBITNOT-84]
-_ = x[OBITNOT-85]
+_ = x[OPLUS-85]
-_ = x[OPLUS-86]
+_ = x[ONEG-86]
-_ = x[ONEG-87]
+_ = x[OOROR-87]
-_ = x[OOROR-88]
+_ = x[OPANIC-88]
-_ = x[OPANIC-89]
+_ = x[OPRINT-89]
-_ = x[OPRINT-90]
+_ = x[OPRINTN-90]
-_ = x[OPRINTN-91]
+_ = x[OPAREN-91]
-_ = x[OPAREN-92]
+_ = x[OSEND-92]
-_ = x[OSEND-93]
+_ = x[OSLICE-93]
-_ = x[OSLICE-94]
+_ = x[OSLICEARR-94]
-_ = x[OSLICEARR-95]
+_ = x[OSLICESTR-95]
-_ = x[OSLICESTR-96]
+_ = x[OSLICE3-96]
-_ = x[OSLICE3-97]
+_ = x[OSLICE3ARR-97]
-_ = x[OSLICE3ARR-98]
+_ = x[OSLICEHEADER-98]
-_ = x[OSLICEHEADER-99]
+_ = x[ORECOVER-99]
-_ = x[ORECOVER-100]
+_ = x[ORECOVERFP-100]
-_ = x[ORECOVERFP-101]
+_ = x[ORECV-101]
-_ = x[ORECV-102]
+_ = x[ORUNESTR-102]
-_ = x[ORUNESTR-103]
+_ = x[OSELRECV2-103]
-_ = x[OSELRECV2-104]
+_ = x[OIOTA-104]
-_ = x[OIOTA-105]
+_ = x[OREAL-105]
-_ = x[OREAL-106]
+_ = x[OIMAG-106]
-_ = x[OIMAG-107]
+_ = x[OCOMPLEX-107]
-_ = x[OCOMPLEX-108]
+_ = x[OALIGNOF-108]
-_ = x[OALIGNOF-109]
+_ = x[OOFFSETOF-109]
-_ = x[OOFFSETOF-110]
+_ = x[OSIZEOF-110]
-_ = x[OSIZEOF-111]
+_ = x[OUNSAFEADD-111]
-_ = x[OUNSAFEADD-112]
+_ = x[OUNSAFESLICE-112]
-_ = x[OUNSAFESLICE-113]
+_ = x[OMETHEXPR-113]
-_ = x[OMETHEXPR-114]
+_ = x[OMETHVALUE-114]
-_ = x[OMETHVALUE-115]
+_ = x[OBLOCK-115]
-_ = x[OBLOCK-116]
+_ = x[OBREAK-116]
-_ = x[OBREAK-117]
+_ = x[OCASE-117]
-_ = x[OCASE-118]
+_ = x[OCONTINUE-118]
-_ = x[OCONTINUE-119]
+_ = x[ODEFER-119]
-_ = x[ODEFER-120]
+_ = x[OFALL-120]
-_ = x[OFALL-121]
+_ = x[OFOR-121]
-_ = x[OFOR-122]
+_ = x[OFORUNTIL-122]
-_ = x[OFORUNTIL-123]
+_ = x[OGOTO-123]
-_ = x[OGOTO-124]
+_ = x[OIF-124]
-_ = x[OIF-125]
+_ = x[OLABEL-125]
-_ = x[OLABEL-126]
+_ = x[OGO-126]
-_ = x[OGO-127]
+_ = x[ORANGE-127]
-_ = x[ORANGE-128]
+_ = x[ORETURN-128]
-_ = x[ORETURN-129]
+_ = x[OSELECT-129]
-_ = x[OSELECT-130]
+_ = x[OSWITCH-130]
-_ = x[OSWITCH-131]
+_ = x[OTYPESW-131]
-_ = x[OTYPESW-132]
+_ = x[OFUNCINST-132]
-_ = x[OFUNCINST-133]
+_ = x[OTFUNC-133]
-_ = x[OTCHAN-134]
+_ = x[OINLCALL-134]
-_ = x[OTMAP-135]
+_ = x[OEFACE-135]
-_ = x[OTSTRUCT-136]
+_ = x[OITAB-136]
-_ = x[OTINTER-137]
+_ = x[OIDATA-137]
-_ = x[OTFUNC-138]
+_ = x[OSPTR-138]
-_ = x[OTARRAY-139]
+_ = x[OCFUNC-139]
-_ = x[OTSLICE-140]
+_ = x[OCHECKNIL-140]
-_ = x[OINLCALL-141]
+_ = x[OVARDEF-141]
-_ = x[OEFACE-142]
+_ = x[OVARKILL-142]
-_ = x[OITAB-143]
+_ = x[OVARLIVE-143]
-_ = x[OIDATA-144]
+_ = x[ORESULT-144]
-_ = x[OSPTR-145]
+_ = x[OINLMARK-145]
-_ = x[OCFUNC-146]
+_ = x[OLINKSYMOFFSET-146]
-_ = x[OCHECKNIL-147]
+_ = x[ODYNAMICDOTTYPE-147]
-_ = x[OVARDEF-148]
+_ = x[ODYNAMICDOTTYPE2-148]
-_ = x[OVARKILL-149]
+_ = x[ODYNAMICTYPE-149]
-_ = x[OVARLIVE-150]
+_ = x[OTAILCALL-150]
-_ = x[ORESULT-151]
+_ = x[OGETG-151]
-_ = x[OINLMARK-152]
+_ = x[OGETCALLERPC-152]
-_ = x[OLINKSYMOFFSET-153]
+_ = x[OGETCALLERSP-153]
-_ = x[ODYNAMICDOTTYPE-154]
+_ = x[OEND-154]
-_ = x[ODYNAMICDOTTYPE2-155]
-_ = x[ODYNAMICTYPE-156]
-_ = x[OTAILCALL-157]
-_ = x[OGETG-158]
-_ = x[OGETCALLERPC-159]
-_ = x[OGETCALLERSP-160]
-_ = x[OEND-161]
}
-const _Op_name = "XXXNAMENONAMETYPEPACKLITERALNILADDSUBORXORADDSTRADDRANDANDAPPENDBYTES2STRBYTES2STRTMPRUNES2STRSTR2BYTESSTR2BYTESTMPSTR2RUNESSLICE2ARRPTRASAS2AS2DOTTYPEAS2FUNCAS2MAPRAS2RECVASOPCALLCALLFUNCCALLMETHCALLINTERCAPCLOSECLOSURECOMPLITMAPLITSTRUCTLITARRAYLITSLICELITPTRLITCONVCONVIFACECONVIDATACONVNOPCOPYDCLDCLFUNCDCLCONSTDCLTYPEDELETEDOTDOTPTRDOTMETHDOTINTERXDOTDOTTYPEDOTTYPE2EQNELTLEGEGTDEREFINDEXINDEXMAPKEYSTRUCTKEYLENMAKEMAKECHANMAKEMAPMAKESLICEMAKESLICECOPYMULDIVMODLSHRSHANDANDNOTNEWNOTBITNOTPLUSNEGORORPANICPRINTPRINTNPARENSENDSLICESLICEARRSLICESTRSLICE3SLICE3ARRSLICEHEADERRECOVERRECOVERFPRECVRUNESTRSELRECV2IOTAREALIMAGCOMPLEXALIGNOFOFFSETOFSIZEOFUNSAFEADDUNSAFESLICEMETHEXPRMETHVALUEBLOCKBREAKCASECONTINUEDEFERFALLFORFORUNTILGOTOIFLABELGORANGERETURNSELECTSWITCHTYPESWFUNCINSTTCHANTMAPTSTRUCTTINTERTFUNCTARRAYTSLICEINLCALLEFACEITABIDATASPTRCFUNCCHECKNILVARDEFVARKILLVARLIVERESULTINLMARKLINKSYMOFFSETDYNAMICDOTTYPEDYNAMICDOTTYPE2DYNAMICTYPETAILCALLGETGGETCALLERPCGETCALLERSPEND"
+const _Op_name = "XXXNAMENONAMETYPELITERALNILADDSUBORXORADDSTRADDRANDANDAPPENDBYTES2STRBYTES2STRTMPRUNES2STRSTR2BYTESSTR2BYTESTMPSTR2RUNESSLICE2ARRPTRASAS2AS2DOTTYPEAS2FUNCAS2MAPRAS2RECVASOPCALLCALLFUNCCALLMETHCALLINTERCAPCLOSECLOSURECOMPLITMAPLITSTRUCTLITARRAYLITSLICELITPTRLITCONVCONVIFACECONVIDATACONVNOPCOPYDCLDCLFUNCDCLCONSTDCLTYPEDELETEDOTDOTPTRDOTMETHDOTINTERXDOTDOTTYPEDOTTYPE2EQNELTLEGEGTDEREFINDEXINDEXMAPKEYSTRUCTKEYLENMAKEMAKECHANMAKEMAPMAKESLICEMAKESLICECOPYMULDIVMODLSHRSHANDANDNOTNEWNOTBITNOTPLUSNEGORORPANICPRINTPRINTNPARENSENDSLICESLICEARRSLICESTRSLICE3SLICE3ARRSLICEHEADERRECOVERRECOVERFPRECVRUNESTRSELRECV2IOTAREALIMAGCOMPLEXALIGNOFOFFSETOFSIZEOFUNSAFEADDUNSAFESLICEMETHEXPRMETHVALUEBLOCKBREAKCASECONTINUEDEFERFALLFORFORUNTILGOTOIFLABELGORANGERETURNSELECTSWITCHTYPESWFUNCINSTTFUNCINLCALLEFACEITABIDATASPTRCFUNCCHECKNILVARDEFVARKILLVARLIVERESULTINLMARKLINKSYMOFFSETDYNAMICDOTTYPEDYNAMICDOTTYPE2DYNAMICTYPETAILCALLGETGGETCALLERPCGETCALLERSPEND"
-var _Op_index = [...]uint16{0, 3, 7, 13, 17, 21, 28, 31, 34, 37, 39, 42, 48, 52, 58, 64, 73, 85, 94, 103, 115, 124, 136, 138, 141, 151, 158, 165, 172, 176, 180, 188, 196, 205, 208, 213, 220, 227, 233, 242, 250, 258, 264, 268, 277, 286, 293, 297, 300, 307, 315, 322, 328, 331, 337, 344, 352, 356, 363, 371, 373, 375, 377, 379, 381, 383, 388, 393, 401, 404, 413, 416, 420, 428, 435, 444, 457, 460, 463, 466, 469, 472, 475, 481, 484, 487, 493, 497, 500, 504, 509, 514, 520, 525, 529, 534, 542, 550, 556, 565, 576, 583, 592, 596, 603, 611, 615, 619, 623, 630, 637, 645, 651, 660, 671, 679, 688, 693, 698, 702, 710, 715, 719, 722, 730, 734, 736, 741, 743, 748, 754, 760, 766, 772, 780, 785, 789, 796, 802, 807, 813, 819, 826, 831, 835, 840, 844, 849, 857, 863, 870, 877, 883, 890, 903, 917, 932, 943, 951, 955, 966, 977, 980}
+var _Op_index = [...]uint16{0, 3, 7, 13, 17, 24, 27, 30, 33, 35, 38, 44, 48, 54, 60, 69, 81, 90, 99, 111, 120, 132, 134, 137, 147, 154, 161, 168, 172, 176, 184, 192, 201, 204, 209, 216, 223, 229, 238, 246, 254, 260, 264, 273, 282, 289, 293, 296, 303, 311, 318, 324, 327, 333, 340, 348, 352, 359, 367, 369, 371, 373, 375, 377, 379, 384, 389, 397, 400, 409, 412, 416, 424, 431, 440, 453, 456, 459, 462, 465, 468, 471, 477, 480, 483, 489, 493, 496, 500, 505, 510, 516, 521, 525, 530, 538, 546, 552, 561, 572, 579, 588, 592, 599, 607, 611, 615, 619, 626, 633, 641, 647, 656, 667, 675, 684, 689, 694, 698, 706, 711, 715, 718, 726, 730, 732, 737, 739, 744, 750, 756, 762, 768, 776, 781, 788, 793, 797, 802, 806, 811, 819, 825, 832, 839, 845, 852, 865, 879, 894, 905, 913, 917, 928, 939, 942}
func (i Op) String() string {
if i >= Op(len(_Op_index)-1) {


@@ -21,7 +21,7 @@ func TestSizeof(t *testing.T) {
_64bit uintptr // size on 64bit platforms
}{
{Func{}, 192, 328},
-{Name{}, 112, 200},
+{Name{}, 108, 192},
}
for _, tt := range tests {


@@ -58,81 +58,6 @@ func (n *miniType) setOTYPE(t *types.Type, self Ntype) {
func (n *miniType) Sym() *types.Sym { return nil } // for Format OTYPE
func (n *miniType) Implicit() bool { return false } // for Format OTYPE
// A ChanType represents a chan Elem syntax with the direction Dir.
type ChanType struct {
miniType
Elem Ntype
Dir types.ChanDir
}
func NewChanType(pos src.XPos, elem Ntype, dir types.ChanDir) *ChanType {
n := &ChanType{Elem: elem, Dir: dir}
n.op = OTCHAN
n.pos = pos
return n
}
func (n *ChanType) SetOTYPE(t *types.Type) {
n.setOTYPE(t, n)
n.Elem = nil
}
// A MapType represents a map[Key]Value type syntax.
type MapType struct {
miniType
Key Ntype
Elem Ntype
}
func NewMapType(pos src.XPos, key, elem Ntype) *MapType {
n := &MapType{Key: key, Elem: elem}
n.op = OTMAP
n.pos = pos
return n
}
func (n *MapType) SetOTYPE(t *types.Type) {
n.setOTYPE(t, n)
n.Key = nil
n.Elem = nil
}
// A StructType represents a struct { ... } type syntax.
type StructType struct {
miniType
Fields []*Field
}
func NewStructType(pos src.XPos, fields []*Field) *StructType {
n := &StructType{Fields: fields}
n.op = OTSTRUCT
n.pos = pos
return n
}
func (n *StructType) SetOTYPE(t *types.Type) {
n.setOTYPE(t, n)
n.Fields = nil
}
// An InterfaceType represents an interface { ... } type syntax.
type InterfaceType struct {
miniType
Methods []*Field
}
func NewInterfaceType(pos src.XPos, methods []*Field) *InterfaceType {
n := &InterfaceType{Methods: methods}
n.op = OTINTER
n.pos = pos
return n
}
func (n *InterfaceType) SetOTYPE(t *types.Type) {
n.setOTYPE(t, n)
n.Methods = nil
}
// A FuncType represents a func(Args) Results type syntax.
type FuncType struct {
miniType
@@ -240,47 +165,6 @@ func editFields(list []*Field, edit func(Node) Node) {
}
}
// A SliceType represents a []Elem type syntax.
// If DDD is true, it's the ...Elem at the end of a function list.
type SliceType struct {
miniType
Elem Ntype
DDD bool
}
func NewSliceType(pos src.XPos, elem Ntype) *SliceType {
n := &SliceType{Elem: elem}
n.op = OTSLICE
n.pos = pos
return n
}
func (n *SliceType) SetOTYPE(t *types.Type) {
n.setOTYPE(t, n)
n.Elem = nil
}
// An ArrayType represents a [Len]Elem type syntax.
// If Len is nil, the type is a [...]Elem in an array literal.
type ArrayType struct {
miniType
Len Node
Elem Ntype
}
func NewArrayType(pos src.XPos, len Node, elem Ntype) *ArrayType {
n := &ArrayType{Len: len, Elem: elem}
n.op = OTARRAY
n.pos = pos
return n
}
func (n *ArrayType) SetOTYPE(t *types.Type) {
n.setOTYPE(t, n)
n.Len = nil
n.Elem = nil
}
// A typeNode is a Node wrapper for type t.
type typeNode struct {
miniNode


@@ -6,63 +6,12 @@
package noder
-type code interface {
+import "internal/pkgbits"
marker() syncMarker
value() int
}
type codeVal int
func (c codeVal) marker() syncMarker { return syncVal }
func (c codeVal) value() int { return int(c) }
const (
valBool codeVal = iota
valString
valInt64
valBigInt
valBigRat
valBigFloat
)
type codeType int
func (c codeType) marker() syncMarker { return syncType }
func (c codeType) value() int { return int(c) }
const (
typeBasic codeType = iota
typeNamed
typePointer
typeSlice
typeArray
typeChan
typeMap
typeSignature
typeStruct
typeInterface
typeUnion
typeTypeParam
)
type codeObj int
func (c codeObj) marker() syncMarker { return syncCodeObj }
func (c codeObj) value() int { return int(c) }
const (
objAlias codeObj = iota
objConst
objType
objFunc
objVar
objStub
)
type codeStmt int
-func (c codeStmt) marker() syncMarker { return syncStmt1 }
+func (c codeStmt) Marker() pkgbits.SyncMarker { return pkgbits.SyncStmt1 }
-func (c codeStmt) value() int { return int(c) }
+func (c codeStmt) Value() int { return int(c) }
const (
stmtEnd codeStmt = iota
@@ -87,8 +36,8 @@ const (
type codeExpr int
-func (c codeExpr) marker() syncMarker { return syncExpr }
+func (c codeExpr) Marker() pkgbits.SyncMarker { return pkgbits.SyncExpr }
-func (c codeExpr) value() int { return int(c) }
+func (c codeExpr) Value() int { return int(c) }
// TODO(mdempsky): Split expr into addr, for lvalues.
const (
@@ -112,8 +61,8 @@ const (
type codeDecl int
-func (c codeDecl) marker() syncMarker { return syncDecl }
+func (c codeDecl) Marker() pkgbits.SyncMarker { return pkgbits.SyncDecl }
-func (c codeDecl) value() int { return int(c) }
+func (c codeDecl) Value() int { return int(c) }
const (
declEnd codeDecl = iota
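After this change the `code*` enums satisfy the exported `pkgbits.Code` interface (`Marker`/`Value`) instead of the old package-local `marker`/`value` pair. A minimal standalone sketch of that pattern, with local `SyncMarker` and `Code` stand-ins for the real `internal/pkgbits` declarations:

```go
package main

import "fmt"

// SyncMarker is a local stand-in for pkgbits.SyncMarker.
type SyncMarker int

const (
	SyncStmt1 SyncMarker = iota + 1
	SyncExpr
)

// Code mirrors the shape of pkgbits.Code: each enum type reports which
// sync marker guards its values in the serialized stream.
type Code interface {
	Marker() SyncMarker
	Value() int
}

type codeStmt int

func (c codeStmt) Marker() SyncMarker { return SyncStmt1 }
func (c codeStmt) Value() int         { return int(c) }

const (
	stmtEnd codeStmt = iota
	stmtLabel
)

func main() {
	// Any codeStmt value can be written generically through the interface.
	var c Code = stmtLabel
	fmt.Println(c.Marker() == SyncStmt1, c.Value())
}
```

The writer only needs the interface: it emits `c.Marker()` as a sync marker followed by `c.Value()`, so each enum family stays type-safe while sharing one encoding path.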


@@ -1,302 +0,0 @@
// UNREVIEWED
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package noder
import (
"encoding/binary"
"fmt"
"go/constant"
"go/token"
"math/big"
"os"
"runtime"
"strings"
"cmd/compile/internal/base"
)
type pkgDecoder struct {
pkgPath string
elemEndsEnds [numRelocs]uint32
elemEnds []uint32
elemData string
}
func newPkgDecoder(pkgPath, input string) pkgDecoder {
pr := pkgDecoder{
pkgPath: pkgPath,
}
// TODO(mdempsky): Implement direct indexing of input string to
// avoid copying the position information.
r := strings.NewReader(input)
assert(binary.Read(r, binary.LittleEndian, pr.elemEndsEnds[:]) == nil)
pr.elemEnds = make([]uint32, pr.elemEndsEnds[len(pr.elemEndsEnds)-1])
assert(binary.Read(r, binary.LittleEndian, pr.elemEnds[:]) == nil)
pos, err := r.Seek(0, os.SEEK_CUR)
assert(err == nil)
pr.elemData = input[pos:]
assert(len(pr.elemData) == int(pr.elemEnds[len(pr.elemEnds)-1]))
return pr
}
func (pr *pkgDecoder) numElems(k reloc) int {
count := int(pr.elemEndsEnds[k])
if k > 0 {
count -= int(pr.elemEndsEnds[k-1])
}
return count
}
func (pr *pkgDecoder) totalElems() int {
return len(pr.elemEnds)
}
func (pr *pkgDecoder) absIdx(k reloc, idx int) int {
absIdx := idx
if k > 0 {
absIdx += int(pr.elemEndsEnds[k-1])
}
if absIdx >= int(pr.elemEndsEnds[k]) {
base.Fatalf("%v:%v is out of bounds; %v", k, idx, pr.elemEndsEnds)
}
return absIdx
}
func (pr *pkgDecoder) dataIdx(k reloc, idx int) string {
absIdx := pr.absIdx(k, idx)
var start uint32
if absIdx > 0 {
start = pr.elemEnds[absIdx-1]
}
end := pr.elemEnds[absIdx]
return pr.elemData[start:end]
}
func (pr *pkgDecoder) stringIdx(idx int) string {
return pr.dataIdx(relocString, idx)
}
func (pr *pkgDecoder) newDecoder(k reloc, idx int, marker syncMarker) decoder {
r := pr.newDecoderRaw(k, idx)
r.sync(marker)
return r
}
func (pr *pkgDecoder) newDecoderRaw(k reloc, idx int) decoder {
r := decoder{
common: pr,
k: k,
idx: idx,
}
// TODO(mdempsky) r.data.Reset(...) after #44505 is resolved.
r.data = *strings.NewReader(pr.dataIdx(k, idx))
r.sync(syncRelocs)
r.relocs = make([]relocEnt, r.len())
for i := range r.relocs {
r.sync(syncReloc)
r.relocs[i] = relocEnt{reloc(r.len()), r.len()}
}
return r
}
type decoder struct {
common *pkgDecoder
relocs []relocEnt
data strings.Reader
k reloc
idx int
}
func (r *decoder) checkErr(err error) {
if err != nil {
base.Fatalf("unexpected error: %v", err)
}
}
func (r *decoder) rawUvarint() uint64 {
x, err := binary.ReadUvarint(&r.data)
r.checkErr(err)
return x
}
func (r *decoder) rawVarint() int64 {
ux := r.rawUvarint()
// Zig-zag decode.
x := int64(ux >> 1)
if ux&1 != 0 {
x = ^x
}
return x
}
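`rawVarint` above reverses the writer's zig-zag encoding: signed values are mapped to unsigned ones so that small magnitudes, positive or negative, become short uvarints. A standalone round-trip sketch of the mapping and its inverse, lifted from the logic above:

```go
package main

import "fmt"

// zigzag maps a signed value to an unsigned one: 0,-1,1,-2,... -> 0,1,2,3,...
// This matches the encoding that rawVarint above decodes.
func zigzag(x int64) uint64 {
	ux := uint64(x) << 1
	if x < 0 {
		ux = ^ux
	}
	return ux
}

// unzigzag is the inverse, identical to the body of rawVarint.
func unzigzag(ux uint64) int64 {
	x := int64(ux >> 1)
	if ux&1 != 0 {
		x = ^x
	}
	return x
}

func main() {
	for _, x := range []int64{0, -1, 1, -2, 63, -64} {
		fmt.Printf("%d -> %d -> %d\n", x, zigzag(x), unzigzag(zigzag(x)))
	}
}
```

The payoff is that `binary.PutUvarint` then stores values near zero in one byte regardless of sign, which a plain two's-complement uvarint would not.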
func (r *decoder) rawReloc(k reloc, idx int) int {
e := r.relocs[idx]
assert(e.kind == k)
return e.idx
}
func (r *decoder) sync(mWant syncMarker) {
if !enableSync {
return
}
pos, _ := r.data.Seek(0, os.SEEK_CUR) // TODO(mdempsky): io.SeekCurrent after #44505 is resolved
mHave := syncMarker(r.rawUvarint())
writerPCs := make([]int, r.rawUvarint())
for i := range writerPCs {
writerPCs[i] = int(r.rawUvarint())
}
if mHave == mWant {
return
}
// There's some tension here between printing:
//
// (1) full file paths that tools can recognize (e.g., so emacs
// hyperlinks the "file:line" text for easy navigation), or
//
// (2) short file paths that are easier for humans to read (e.g., by
// omitting redundant or irrelevant details, so it's easier to
// focus on the useful bits that remain).
//
// The current formatting favors the former, as it seems more
// helpful in practice. But perhaps the formatting could be improved
// to better address both concerns. For example, use relative file
// paths if they would be shorter, or rewrite file paths to contain
// "$GOROOT" (like objabi.AbsFile does) if tools can be taught how
// to reliably expand that again.
fmt.Printf("export data desync: package %q, section %v, index %v, offset %v\n", r.common.pkgPath, r.k, r.idx, pos)
fmt.Printf("\nfound %v, written at:\n", mHave)
if len(writerPCs) == 0 {
fmt.Printf("\t[stack trace unavailable; recompile package %q with -d=syncframes]\n", r.common.pkgPath)
}
for _, pc := range writerPCs {
fmt.Printf("\t%s\n", r.common.stringIdx(r.rawReloc(relocString, pc)))
}
fmt.Printf("\nexpected %v, reading at:\n", mWant)
var readerPCs [32]uintptr // TODO(mdempsky): Dynamically size?
n := runtime.Callers(2, readerPCs[:])
for _, pc := range fmtFrames(readerPCs[:n]...) {
fmt.Printf("\t%s\n", pc)
}
// We already printed a stack trace for the reader, so now we can
// simply exit. Printing a second one with panic or base.Fatalf
// would just be noise.
os.Exit(1)
}
func (r *decoder) bool() bool {
r.sync(syncBool)
x, err := r.data.ReadByte()
r.checkErr(err)
assert(x < 2)
return x != 0
}
func (r *decoder) int64() int64 {
r.sync(syncInt64)
return r.rawVarint()
}
func (r *decoder) uint64() uint64 {
r.sync(syncUint64)
return r.rawUvarint()
}
func (r *decoder) len() int { x := r.uint64(); v := int(x); assert(uint64(v) == x); return v }
func (r *decoder) int() int { x := r.int64(); v := int(x); assert(int64(v) == x); return v }
func (r *decoder) uint() uint { x := r.uint64(); v := uint(x); assert(uint64(v) == x); return v }
func (r *decoder) code(mark syncMarker) int {
r.sync(mark)
return r.len()
}
func (r *decoder) reloc(k reloc) int {
r.sync(syncUseReloc)
return r.rawReloc(k, r.len())
}
func (r *decoder) string() string {
r.sync(syncString)
return r.common.stringIdx(r.reloc(relocString))
}
func (r *decoder) strings() []string {
res := make([]string, r.len())
for i := range res {
res[i] = r.string()
}
return res
}
func (r *decoder) value() constant.Value {
r.sync(syncValue)
isComplex := r.bool()
val := r.scalar()
if isComplex {
val = constant.BinaryOp(val, token.ADD, constant.MakeImag(r.scalar()))
}
return val
}
func (r *decoder) scalar() constant.Value {
switch tag := codeVal(r.code(syncVal)); tag {
default:
panic(fmt.Sprintf("unexpected scalar tag: %v", tag))
case valBool:
return constant.MakeBool(r.bool())
case valString:
return constant.MakeString(r.string())
case valInt64:
return constant.MakeInt64(r.int64())
case valBigInt:
return constant.Make(r.bigInt())
case valBigRat:
num := r.bigInt()
denom := r.bigInt()
return constant.Make(new(big.Rat).SetFrac(num, denom))
case valBigFloat:
return constant.Make(r.bigFloat())
}
}
func (r *decoder) bigInt() *big.Int {
v := new(big.Int).SetBytes([]byte(r.string()))
if r.bool() {
v.Neg(v)
}
return v
}
func (r *decoder) bigFloat() *big.Float {
v := new(big.Float).SetPrec(512)
assert(v.UnmarshalText([]byte(r.string())) == nil)
return v
}
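The deleted decoder stores every element of a section concatenated in `elemData` and locates element `idx` through the cumulative end offsets in `elemEnds` (`dataIdx` above). A self-contained sketch of that prefix-sum indexing, with made-up element strings:

```go
package main

import "fmt"

// dataIdx recovers element idx from concatenated data plus cumulative end
// offsets, mirroring (*pkgDecoder).dataIdx above: element idx spans
// [elemEnds[idx-1], elemEnds[idx]).
func dataIdx(elemData string, elemEnds []uint32, idx int) string {
	var start uint32
	if idx > 0 {
		start = elemEnds[idx-1]
	}
	return elemData[start:elemEnds[idx]]
}

func main() {
	// Build the encoded form the way (*pkgEncoder).dump writes it:
	// running total of element lengths, then the raw bytes.
	elems := []string{"foo", "bar!", "z"}
	var elemData string
	var elemEnds []uint32
	for _, e := range elems {
		elemData += e
		elemEnds = append(elemEnds, uint32(len(elemData)))
	}
	for i := range elems {
		fmt.Println(dataIdx(elemData, elemEnds, i))
	}
}
```

Storing only end offsets halves the index size versus (start, length) pairs while still giving O(1) random access to any element.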


@@ -1,285 +0,0 @@
// UNREVIEWED
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package noder
import (
"bytes"
"encoding/binary"
"fmt"
"go/constant"
"io"
"math/big"
"runtime"
"cmd/compile/internal/base"
)
type pkgEncoder struct {
elems [numRelocs][]string
stringsIdx map[string]int
}
func newPkgEncoder() pkgEncoder {
return pkgEncoder{
stringsIdx: make(map[string]int),
}
}
func (pw *pkgEncoder) dump(out io.Writer) {
writeUint32 := func(x uint32) {
assert(binary.Write(out, binary.LittleEndian, x) == nil)
}
var sum uint32
for _, elems := range &pw.elems {
sum += uint32(len(elems))
writeUint32(sum)
}
sum = 0
for _, elems := range &pw.elems {
for _, elem := range elems {
sum += uint32(len(elem))
writeUint32(sum)
}
}
for _, elems := range &pw.elems {
for _, elem := range elems {
_, err := io.WriteString(out, elem)
assert(err == nil)
}
}
}
func (pw *pkgEncoder) stringIdx(s string) int {
if idx, ok := pw.stringsIdx[s]; ok {
assert(pw.elems[relocString][idx] == s)
return idx
}
idx := len(pw.elems[relocString])
pw.elems[relocString] = append(pw.elems[relocString], s)
pw.stringsIdx[s] = idx
return idx
}
func (pw *pkgEncoder) newEncoder(k reloc, marker syncMarker) encoder {
e := pw.newEncoderRaw(k)
e.sync(marker)
return e
}
func (pw *pkgEncoder) newEncoderRaw(k reloc) encoder {
idx := len(pw.elems[k])
pw.elems[k] = append(pw.elems[k], "") // placeholder
return encoder{
p: pw,
k: k,
idx: idx,
}
}
// Encoders
type encoder struct {
p *pkgEncoder
relocs []relocEnt
data bytes.Buffer
encodingRelocHeader bool
k reloc
idx int
}
func (w *encoder) flush() int {
var sb bytes.Buffer // TODO(mdempsky): strings.Builder after #44505 is resolved
// Backup the data so we write the relocations at the front.
var tmp bytes.Buffer
io.Copy(&tmp, &w.data)
// TODO(mdempsky): Consider writing these out separately so they're
// easier to strip, along with function bodies, so that we can prune
// down to just the data that's relevant to go/types.
if w.encodingRelocHeader {
base.Fatalf("encodingRelocHeader already true; recursive flush?")
}
w.encodingRelocHeader = true
w.sync(syncRelocs)
w.len(len(w.relocs))
for _, rent := range w.relocs {
w.sync(syncReloc)
w.len(int(rent.kind))
w.len(rent.idx)
}
io.Copy(&sb, &w.data)
io.Copy(&sb, &tmp)
w.p.elems[w.k][w.idx] = sb.String()
return w.idx
}
func (w *encoder) checkErr(err error) {
if err != nil {
base.Fatalf("unexpected error: %v", err)
}
}
func (w *encoder) rawUvarint(x uint64) {
var buf [binary.MaxVarintLen64]byte
n := binary.PutUvarint(buf[:], x)
_, err := w.data.Write(buf[:n])
w.checkErr(err)
}
func (w *encoder) rawVarint(x int64) {
// Zig-zag encode.
ux := uint64(x) << 1
if x < 0 {
ux = ^ux
}
w.rawUvarint(ux)
}
func (w *encoder) rawReloc(r reloc, idx int) int {
// TODO(mdempsky): Use map for lookup.
for i, rent := range w.relocs {
if rent.kind == r && rent.idx == idx {
return i
}
}
i := len(w.relocs)
w.relocs = append(w.relocs, relocEnt{r, idx})
return i
}
func (w *encoder) sync(m syncMarker) {
if !enableSync {
return
}
// Writing out stack frame string references requires working
// relocations, but writing out the relocations themselves involves
// sync markers. To prevent infinite recursion, we simply trim the
// stack frame for sync markers within the relocation header.
var frames []string
if !w.encodingRelocHeader && base.Debug.SyncFrames > 0 {
pcs := make([]uintptr, base.Debug.SyncFrames)
n := runtime.Callers(2, pcs)
frames = fmtFrames(pcs[:n]...)
}
// TODO(mdempsky): Save space by writing out stack frames as a
// linked list so we can share common stack frames.
w.rawUvarint(uint64(m))
w.rawUvarint(uint64(len(frames)))
for _, frame := range frames {
w.rawUvarint(uint64(w.rawReloc(relocString, w.p.stringIdx(frame))))
}
}
func (w *encoder) bool(b bool) bool {
w.sync(syncBool)
var x byte
if b {
x = 1
}
err := w.data.WriteByte(x)
w.checkErr(err)
return b
}
func (w *encoder) int64(x int64) {
w.sync(syncInt64)
w.rawVarint(x)
}
func (w *encoder) uint64(x uint64) {
w.sync(syncUint64)
w.rawUvarint(x)
}
func (w *encoder) len(x int) { assert(x >= 0); w.uint64(uint64(x)) }
func (w *encoder) int(x int) { w.int64(int64(x)) }
func (w *encoder) uint(x uint) { w.uint64(uint64(x)) }
func (w *encoder) reloc(r reloc, idx int) {
w.sync(syncUseReloc)
w.len(w.rawReloc(r, idx))
}
func (w *encoder) code(c code) {
w.sync(c.marker())
w.len(c.value())
}
func (w *encoder) string(s string) {
w.sync(syncString)
w.reloc(relocString, w.p.stringIdx(s))
}
func (w *encoder) strings(ss []string) {
w.len(len(ss))
for _, s := range ss {
w.string(s)
}
}
func (w *encoder) value(val constant.Value) {
w.sync(syncValue)
if w.bool(val.Kind() == constant.Complex) {
w.scalar(constant.Real(val))
w.scalar(constant.Imag(val))
} else {
w.scalar(val)
}
}
func (w *encoder) scalar(val constant.Value) {
switch v := constant.Val(val).(type) {
default:
panic(fmt.Sprintf("unhandled %v (%v)", val, val.Kind()))
case bool:
w.code(valBool)
w.bool(v)
case string:
w.code(valString)
w.string(v)
case int64:
w.code(valInt64)
w.int64(v)
case *big.Int:
w.code(valBigInt)
w.bigInt(v)
case *big.Rat:
w.code(valBigRat)
w.bigInt(v.Num())
w.bigInt(v.Denom())
case *big.Float:
w.code(valBigFloat)
w.bigFloat(v)
}
}
func (w *encoder) bigInt(v *big.Int) {
b := v.Bytes()
w.string(string(b)) // TODO: More efficient encoding.
w.bool(v.Sign() < 0)
}
func (w *encoder) bigFloat(v *big.Float) {
b := v.Append(nil, 'p', -1)
w.string(string(b)) // TODO: More efficient encoding.
}


@@ -114,7 +114,7 @@ func (g *irgen) expr0(typ types2.Type, expr syntax.Expr) ir.Node {
case *syntax.CallExpr:
fun := g.expr(expr.Fun)
-return Call(pos, g.typ(typ), fun, g.exprs(expr.ArgList), expr.HasDots)
+return g.callExpr(pos, g.typ(typ), fun, g.exprs(expr.ArgList), expr.HasDots)
case *syntax.IndexExpr:
args := unpackListExpr(expr.Index)
@@ -206,6 +206,53 @@ func (g *irgen) substType(typ *types.Type, tparams *types.Type, targs []ir.Node)
return newt
}
// callExpr creates a call expression (which might be a type conversion, built-in
// call, or a regular call) and does standard transforms, unless we are in a generic
// function.
func (g *irgen) callExpr(pos src.XPos, typ *types.Type, fun ir.Node, args []ir.Node, dots bool) ir.Node {
n := ir.NewCallExpr(pos, ir.OCALL, fun, args)
n.IsDDD = dots
typed(typ, n)
if fun.Op() == ir.OTYPE {
// Actually a type conversion, not a function call.
if !g.delayTransform() {
return transformConvCall(n)
}
return n
}
if fun, ok := fun.(*ir.Name); ok && fun.BuiltinOp != 0 {
if !g.delayTransform() {
return transformBuiltin(n)
}
return n
}
// Add information, now that we know that fun is actually being called.
switch fun := fun.(type) {
case *ir.SelectorExpr:
if fun.Op() == ir.OMETHVALUE {
op := ir.ODOTMETH
if fun.X.Type().IsInterface() {
op = ir.ODOTINTER
}
fun.SetOp(op)
// Set the type to include the receiver, since that's what
// later parts of the compiler expect
fun.SetType(fun.Selection.Type)
}
}
// A function instantiation (even if fully concrete) shouldn't be
// transformed yet, because we need to add the dictionary during the
// transformation.
if fun.Op() != ir.OFUNCINST && !g.delayTransform() {
transformCall(n)
}
return n
}
// selectorExpr resolves the choice of ODOT, ODOTPTR, OMETHVALUE (eventually
// ODOTMETH & ODOTINTER), and OMETHEXPR and deals with embedded fields here rather
// than in typecheck.go.
@@ -332,13 +379,13 @@ func (g *irgen) exprs(exprs []syntax.Expr) []ir.Node {
}
func (g *irgen) compLit(typ types2.Type, lit *syntax.CompositeLit) ir.Node {
-if ptr, ok := types2.StructuralType(typ).(*types2.Pointer); ok {
+if ptr, ok := types2.CoreType(typ).(*types2.Pointer); ok {
n := ir.NewAddrExpr(g.pos(lit), g.compLit(ptr.Elem(), lit))
n.SetOp(ir.OPTRLIT)
return typed(g.typ(typ), n)
}
-_, isStruct := types2.StructuralType(typ).(*types2.Struct)
+_, isStruct := types2.CoreType(typ).(*types2.Struct)
exprs := make([]ir.Node, len(lit.ElemList))
for i, elem := range lit.ElemList {


@@ -98,95 +98,6 @@ func Binary(pos src.XPos, op ir.Op, typ *types.Type, x, y ir.Node) *ir.BinaryExp
}
}
func Call(pos src.XPos, typ *types.Type, fun ir.Node, args []ir.Node, dots bool) ir.Node {
n := ir.NewCallExpr(pos, ir.OCALL, fun, args)
n.IsDDD = dots
if fun.Op() == ir.OTYPE {
// Actually a type conversion, not a function call.
if !fun.Type().IsInterface() &&
(fun.Type().HasTParam() || args[0].Type().HasTParam()) {
// For type params, we can transform if fun.Type() is known
// to be an interface (in which case a CONVIFACE node will be
// inserted). Otherwise, don't typecheck until we actually
// know the type.
return typed(typ, n)
}
typed(typ, n)
return transformConvCall(n)
}
if fun, ok := fun.(*ir.Name); ok && fun.BuiltinOp != 0 {
// For most Builtin ops, we delay doing transformBuiltin if any of the
// args have type params, for a variety of reasons:
//
// OMAKE: transformMake can't choose specific ops OMAKESLICE, etc.
// until arg type is known
// OREAL/OIMAG: transformRealImag can't determine type float32/float64
// until arg type known
// OAPPEND: transformAppend requires that the arg is a slice
// ODELETE: transformDelete requires that the arg is a map
// OALIGNOF, OSIZEOF: can be eval'ed to a constant until types known.
switch fun.BuiltinOp {
case ir.OMAKE, ir.OREAL, ir.OIMAG, ir.OAPPEND, ir.ODELETE, ir.OALIGNOF, ir.OOFFSETOF, ir.OSIZEOF:
hasTParam := false
for _, arg := range args {
if fun.BuiltinOp == ir.OOFFSETOF {
// It's the type of left operand of the
// selection that matters, not the type of
// the field itself (which is irrelevant for
// offsetof).
arg = arg.(*ir.SelectorExpr).X
}
if arg.Type().HasTParam() {
hasTParam = true
break
}
}
if hasTParam {
return typed(typ, n)
}
}
typed(typ, n)
return transformBuiltin(n)
}
// Add information, now that we know that fun is actually being called.
switch fun := fun.(type) {
case *ir.SelectorExpr:
if fun.Op() == ir.OMETHVALUE {
op := ir.ODOTMETH
if fun.X.Type().IsInterface() {
op = ir.ODOTINTER
}
fun.SetOp(op)
// Set the type to include the receiver, since that's what
// later parts of the compiler expect
fun.SetType(fun.Selection.Type)
}
}
if fun.Type().HasTParam() || fun.Op() == ir.OXDOT || fun.Op() == ir.OFUNCINST {
// If the fun arg is or has a type param, we can't do all the
// transformations, since we may not have needed properties yet
// (e.g. number of return values, etc). The same applies if a fun
// which is an XDOT could not be transformed yet because of a generic
// type in the X of the selector expression.
//
// A function instantiation (even if fully concrete) shouldn't be
// transformed yet, because we need to add the dictionary during the
// transformation.
return typed(typ, n)
}
// If no type params, do the normal call transformations. This
// will convert OCALL to OCALLFUNC.
typed(typ, n)
transformCall(n)
return n
}
func Compare(pos src.XPos, typ *types.Type, op ir.Op, x, y ir.Node) *ir.BinaryExpr {
	n := ir.NewBinaryExpr(pos, op, x, y)
	typed(typ, n)


@@ -11,7 +11,6 @@ import (
	"os"
	pathpkg "path"
	"runtime"
-	"sort"
	"strconv"
	"strings"
	"unicode"
@@ -20,7 +19,6 @@ import (
	"cmd/compile/internal/base"
	"cmd/compile/internal/importer"
	"cmd/compile/internal/ir"
-	"cmd/compile/internal/syntax"
	"cmd/compile/internal/typecheck"
	"cmd/compile/internal/types"
	"cmd/compile/internal/types2"
@@ -28,7 +26,6 @@ import (
	"cmd/internal/bio"
	"cmd/internal/goobj"
	"cmd/internal/objabi"
-	"cmd/internal/src"
)
// haveLegacyImports records whether we've imported any packages
@@ -141,10 +138,6 @@ func openPackage(path string) (*os.File, error) {
	return nil, errors.New("file not found")
}

-// myheight tracks the local package's height based on packages
-// imported so far.
-var myheight int

// resolveImportPath resolves an import path as it appears in a Go
// source file to the package's full path.
func resolveImportPath(path string) (string, error) {
@@ -187,42 +180,6 @@ func resolveImportPath(path string) (string, error) {
	return path, nil
}
func importfile(decl *syntax.ImportDecl) *types.Pkg {
path, err := parseImportPath(decl.Path)
if err != nil {
base.Errorf("%s", err)
return nil
}
pkg, _, err := readImportFile(path, typecheck.Target, nil, nil)
if err != nil {
base.Errorf("%s", err)
return nil
}
if pkg != types.UnsafePkg && pkg.Height >= myheight {
myheight = pkg.Height + 1
}
return pkg
}
func parseImportPath(pathLit *syntax.BasicLit) (string, error) {
if pathLit.Kind != syntax.StringLit {
return "", errors.New("import path must be a string")
}
path, err := strconv.Unquote(pathLit.Value)
if err != nil {
return "", errors.New("import path must be a string")
}
if err := checkImportPath(path, false); err != nil {
return "", err
}
return path, err
}
// readImportFile reads the import file for the given package path and
// returns its types.Pkg representation. If packages is non-nil, the
// types2.Package representation is also returned.
@@ -467,135 +424,3 @@ func checkImportPath(path string, allowSpace bool) error {
	return nil
}
func pkgnotused(lineno src.XPos, path string, name string) {
// If the package was imported with a name other than the final
// import path element, show it explicitly in the error message.
// Note that this handles both renamed imports and imports of
// packages containing unconventional package declarations.
// Note that this uses / always, even on Windows, because Go import
// paths always use forward slashes.
elem := path
if i := strings.LastIndex(elem, "/"); i >= 0 {
elem = elem[i+1:]
}
if name == "" || elem == name {
base.ErrorfAt(lineno, "imported and not used: %q", path)
} else {
base.ErrorfAt(lineno, "imported and not used: %q as %s", path, name)
}
}
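The last-element extraction in the removed `pkgnotused` can be reproduced in isolation; note the comment's point that import paths always use `/`, even on Windows. A sketch:

```go
package main

import (
	"fmt"
	"strings"
)

// lastElem mirrors the logic in pkgnotused: take everything after the
// final '/' (import paths always use forward slashes, even on Windows).
func lastElem(path string) string {
	if i := strings.LastIndex(path, "/"); i >= 0 {
		return path[i+1:]
	}
	return path
}

func main() {
	fmt.Println(lastElem("cmd/compile/internal/noder")) // noder
	fmt.Println(lastElem("fmt"))                        // fmt
}
```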
func mkpackage(pkgname string) {
if types.LocalPkg.Name == "" {
if pkgname == "_" {
base.Errorf("invalid package name _")
}
types.LocalPkg.Name = pkgname
} else {
if pkgname != types.LocalPkg.Name {
base.Errorf("package %s; expected %s", pkgname, types.LocalPkg.Name)
}
}
}
func clearImports() {
type importedPkg struct {
pos src.XPos
path string
name string
}
var unused []importedPkg
for _, s := range types.LocalPkg.Syms {
n := ir.AsNode(s.Def)
if n == nil {
continue
}
if n.Op() == ir.OPACK {
// throw away top-level package name left over
// from previous file.
// leave s->block set to cause redeclaration
// errors if a conflicting top-level name is
// introduced by a different file.
p := n.(*ir.PkgName)
if !p.Used && base.SyntaxErrors() == 0 {
unused = append(unused, importedPkg{p.Pos(), p.Pkg.Path, s.Name})
}
s.Def = nil
continue
}
if s.Def != nil && s.Def.Sym() != s {
// throw away top-level name left over
// from previous import . "x"
// We'll report errors after type checking in CheckDotImports.
s.Def = nil
continue
}
}
sort.Slice(unused, func(i, j int) bool { return unused[i].pos.Before(unused[j].pos) })
for _, pkg := range unused {
pkgnotused(pkg.pos, pkg.path, pkg.name)
}
}
// CheckDotImports reports errors for any unused dot imports.
func CheckDotImports() {
for _, pack := range dotImports {
if !pack.Used {
base.ErrorfAt(pack.Pos(), "imported and not used: %q", pack.Pkg.Path)
}
}
// No longer needed; release memory.
dotImports = nil
typecheck.DotImportRefs = nil
}
// dotImports tracks all PkgNames that have been dot-imported.
var dotImports []*ir.PkgName
// find all the exported symbols in package referenced by PkgName,
// and make them available in the current package
func importDot(pack *ir.PkgName) {
if typecheck.DotImportRefs == nil {
typecheck.DotImportRefs = make(map[*ir.Ident]*ir.PkgName)
}
opkg := pack.Pkg
for _, s := range opkg.Syms {
if s.Def == nil {
if _, ok := typecheck.DeclImporter[s]; !ok {
continue
}
}
if !types.IsExported(s.Name) || strings.ContainsRune(s.Name, 0xb7) { // 0xb7 = center dot
continue
}
s1 := typecheck.Lookup(s.Name)
if s1.Def != nil {
pkgerror := fmt.Sprintf("during import %q", opkg.Path)
typecheck.Redeclared(base.Pos, s1, pkgerror)
continue
}
id := ir.NewIdent(src.NoXPos, s)
typecheck.DotImportRefs[id] = pack
s1.Def = id
s1.Block = 1
}
dotImports = append(dotImports, pack)
}
// importName is like oldname,
// but it reports an error if sym is from another package and not exported.
func importName(sym *types.Sym) ir.Node {
n := oldname(sym)
if !types.IsExported(sym.Name) && sym.Pkg != types.LocalPkg {
n.SetDiag(true)
base.Errorf("cannot refer to unexported name %s.%s", sym.Pkg.Name, sym.Name)
}
return n
}
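The exportedness check behind `importName`'s error (`types.IsExported`) comes down to whether the first rune of the name is upper-case. A sketch of that check, assuming the first-rune rule (the helper name here is illustrative):

```go
package main

import (
	"fmt"
	"unicode"
	"unicode/utf8"
)

// isExported sketches the rule behind types.IsExported: a name is
// exported when its first rune is an upper-case letter.
func isExported(name string) bool {
	r, _ := utf8.DecodeRuneInString(name)
	return unicode.IsUpper(r)
}

func main() {
	fmt.Println(isExported("Printf"), isExported("printf")) // true false
}
```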


@@ -6,7 +6,6 @@ package noder

import (
	"fmt"
-	"os"

	"cmd/compile/internal/base"
	"cmd/compile/internal/dwarfgen"
@@ -77,10 +76,6 @@ func checkFiles(noders []*noder) (posMap, *types2.Package, *types2.Info) {
func check2(noders []*noder) {
	m, pkg, info := checkFiles(noders)

-	if base.Flag.G < 2 {
-		os.Exit(0)
-	}

	g := irgen{
		target: typecheck.Target,
		self:   pkg,
@@ -90,10 +85,6 @@ func check2(noders []*noder) {
		typs: make(map[types2.Type]*types.Type),
	}
	g.generate(noders)

-	if base.Flag.G < 3 {
-		os.Exit(0)
-	}
}

// Information about sub-dictionary entries in a dictionary


@@ -7,6 +7,7 @@
package noder

import (
+	"internal/pkgbits"
	"io"

	"cmd/compile/internal/base"
@@ -29,26 +30,30 @@ import (
// multiple parts into a cohesive whole"... e.g., "assembler" and
// "compiler" are also already taken.

+// TODO(mdempsky): Should linker go into pkgbits? Probably the
+// low-level linking details can be moved there, but the logic for
+// handling extension data needs to stay in the compiler.
type linker struct {
-	pw pkgEncoder
+	pw pkgbits.PkgEncoder

	pkgs  map[string]int
	decls map[*types.Sym]int
}

-func (l *linker) relocAll(pr *pkgReader, relocs []relocEnt) []relocEnt {
-	res := make([]relocEnt, len(relocs))
+func (l *linker) relocAll(pr *pkgReader, relocs []pkgbits.RelocEnt) []pkgbits.RelocEnt {
+	res := make([]pkgbits.RelocEnt, len(relocs))
	for i, rent := range relocs {
-		rent.idx = l.relocIdx(pr, rent.kind, rent.idx)
+		rent.Idx = l.relocIdx(pr, rent.Kind, rent.Idx)
		res[i] = rent
	}
	return res
}

-func (l *linker) relocIdx(pr *pkgReader, k reloc, idx int) int {
+func (l *linker) relocIdx(pr *pkgReader, k pkgbits.RelocKind, idx int) int {
	assert(pr != nil)

-	absIdx := pr.absIdx(k, idx)
+	absIdx := pr.AbsIdx(k, idx)

	if newidx := pr.newindex[absIdx]; newidx != 0 {
		return ^newidx
@@ -56,11 +61,11 @@ func (l *linker) relocIdx(pr *pkgReader, k reloc, idx int) int {
	var newidx int
	switch k {
-	case relocString:
+	case pkgbits.RelocString:
		newidx = l.relocString(pr, idx)
-	case relocPkg:
+	case pkgbits.RelocPkg:
		newidx = l.relocPkg(pr, idx)
-	case relocObj:
+	case pkgbits.RelocObj:
		newidx = l.relocObj(pr, idx)
	default:
@@ -70,9 +75,9 @@ func (l *linker) relocIdx(pr *pkgReader, k reloc, idx int) int {
		// every section could be deduplicated. This would also be easier
		// if we do external relocations.

-		w := l.pw.newEncoderRaw(k)
+		w := l.pw.NewEncoderRaw(k)
		l.relocCommon(pr, &w, k, idx)
-		newidx = w.idx
+		newidx = w.Idx
	}

	pr.newindex[absIdx] = ^newidx
@@ -81,43 +86,43 @@ func (l *linker) relocIdx(pr *pkgReader, k reloc, idx int) int {
}

func (l *linker) relocString(pr *pkgReader, idx int) int {
-	return l.pw.stringIdx(pr.stringIdx(idx))
+	return l.pw.StringIdx(pr.StringIdx(idx))
}

func (l *linker) relocPkg(pr *pkgReader, idx int) int {
-	path := pr.peekPkgPath(idx)
+	path := pr.PeekPkgPath(idx)

	if newidx, ok := l.pkgs[path]; ok {
		return newidx
	}

-	r := pr.newDecoder(relocPkg, idx, syncPkgDef)
-	w := l.pw.newEncoder(relocPkg, syncPkgDef)
-	l.pkgs[path] = w.idx
+	r := pr.NewDecoder(pkgbits.RelocPkg, idx, pkgbits.SyncPkgDef)
+	w := l.pw.NewEncoder(pkgbits.RelocPkg, pkgbits.SyncPkgDef)
+	l.pkgs[path] = w.Idx

	// TODO(mdempsky): We end up leaving an empty string reference here
	// from when the package was originally written as "". Probably not
	// a big deal, but a little annoying. Maybe relocating
	// cross-references in place is the way to go after all.
-	w.relocs = l.relocAll(pr, r.relocs)
+	w.Relocs = l.relocAll(pr, r.Relocs)

-	_ = r.string() // original path
-	w.string(path)
+	_ = r.String() // original path
+	w.String(path)

-	io.Copy(&w.data, &r.data)
+	io.Copy(&w.Data, &r.Data)

-	return w.flush()
+	return w.Flush()
}

func (l *linker) relocObj(pr *pkgReader, idx int) int {
-	path, name, tag := pr.peekObj(idx)
+	path, name, tag := pr.PeekObj(idx)
	sym := types.NewPkg(path, "").Lookup(name)

	if newidx, ok := l.decls[sym]; ok {
		return newidx
	}

-	if tag == objStub && path != "builtin" && path != "unsafe" {
+	if tag == pkgbits.ObjStub && path != "builtin" && path != "unsafe" {
		pri, ok := objReader[sym]
		if !ok {
			base.Fatalf("missing reader for %q.%v", path, name)
@@ -127,25 +132,25 @@ func (l *linker) relocObj(pr *pkgReader, idx int) int {
		pr = pri.pr
		idx = pri.idx

-		path2, name2, tag2 := pr.peekObj(idx)
+		path2, name2, tag2 := pr.PeekObj(idx)
		sym2 := types.NewPkg(path2, "").Lookup(name2)
		assert(sym == sym2)
-		assert(tag2 != objStub)
+		assert(tag2 != pkgbits.ObjStub)
	}

-	w := l.pw.newEncoderRaw(relocObj)
-	wext := l.pw.newEncoderRaw(relocObjExt)
-	wname := l.pw.newEncoderRaw(relocName)
-	wdict := l.pw.newEncoderRaw(relocObjDict)
+	w := l.pw.NewEncoderRaw(pkgbits.RelocObj)
+	wext := l.pw.NewEncoderRaw(pkgbits.RelocObjExt)
+	wname := l.pw.NewEncoderRaw(pkgbits.RelocName)
+	wdict := l.pw.NewEncoderRaw(pkgbits.RelocObjDict)

-	l.decls[sym] = w.idx
-	assert(wext.idx == w.idx)
-	assert(wname.idx == w.idx)
-	assert(wdict.idx == w.idx)
+	l.decls[sym] = w.Idx
+	assert(wext.Idx == w.Idx)
+	assert(wname.Idx == w.Idx)
+	assert(wdict.Idx == w.Idx)

-	l.relocCommon(pr, &w, relocObj, idx)
-	l.relocCommon(pr, &wname, relocName, idx)
-	l.relocCommon(pr, &wdict, relocObjDict, idx)
+	l.relocCommon(pr, &w, pkgbits.RelocObj, idx)
+	l.relocCommon(pr, &wname, pkgbits.RelocName, idx)
+	l.relocCommon(pr, &wdict, pkgbits.RelocObjDict, idx)

	var obj *ir.Name
	if path == "" {
@@ -162,70 +167,70 @@ func (l *linker) relocObj(pr *pkgReader, idx int) int {
	}

	if obj != nil {
-		wext.sync(syncObject1)
+		wext.Sync(pkgbits.SyncObject1)
		switch tag {
-		case objFunc:
+		case pkgbits.ObjFunc:
			l.relocFuncExt(&wext, obj)
-		case objType:
+		case pkgbits.ObjType:
			l.relocTypeExt(&wext, obj)
-		case objVar:
+		case pkgbits.ObjVar:
			l.relocVarExt(&wext, obj)
		}
-		wext.flush()
+		wext.Flush()
	} else {
-		l.relocCommon(pr, &wext, relocObjExt, idx)
+		l.relocCommon(pr, &wext, pkgbits.RelocObjExt, idx)
	}

-	return w.idx
+	return w.Idx
}

-func (l *linker) relocCommon(pr *pkgReader, w *encoder, k reloc, idx int) {
-	r := pr.newDecoderRaw(k, idx)
-	w.relocs = l.relocAll(pr, r.relocs)
-	io.Copy(&w.data, &r.data)
-	w.flush()
+func (l *linker) relocCommon(pr *pkgReader, w *pkgbits.Encoder, k pkgbits.RelocKind, idx int) {
+	r := pr.NewDecoderRaw(k, idx)
+	w.Relocs = l.relocAll(pr, r.Relocs)
+	io.Copy(&w.Data, &r.Data)
+	w.Flush()
}

-func (l *linker) pragmaFlag(w *encoder, pragma ir.PragmaFlag) {
-	w.sync(syncPragma)
-	w.int(int(pragma))
+func (l *linker) pragmaFlag(w *pkgbits.Encoder, pragma ir.PragmaFlag) {
+	w.Sync(pkgbits.SyncPragma)
+	w.Int(int(pragma))
}

-func (l *linker) relocFuncExt(w *encoder, name *ir.Name) {
-	w.sync(syncFuncExt)
+func (l *linker) relocFuncExt(w *pkgbits.Encoder, name *ir.Name) {
+	w.Sync(pkgbits.SyncFuncExt)

	l.pragmaFlag(w, name.Func.Pragma)
	l.linkname(w, name)

	// Relocated extension data.
-	w.bool(true)
+	w.Bool(true)

	// Record definition ABI so cross-ABI calls can be direct.
	// This is important for the performance of calling some
	// common functions implemented in assembly (e.g., bytealg).
-	w.uint64(uint64(name.Func.ABI))
+	w.Uint64(uint64(name.Func.ABI))

	// Escape analysis.
	for _, fs := range &types.RecvsParams {
		for _, f := range fs(name.Type()).FieldSlice() {
-			w.string(f.Note)
+			w.String(f.Note)
		}
	}

-	if inl := name.Func.Inl; w.bool(inl != nil) {
-		w.len(int(inl.Cost))
-		w.bool(inl.CanDelayResults)
+	if inl := name.Func.Inl; w.Bool(inl != nil) {
+		w.Len(int(inl.Cost))
+		w.Bool(inl.CanDelayResults)

		pri, ok := bodyReader[name.Func]
		assert(ok)
-		w.reloc(relocBody, l.relocIdx(pri.pr, relocBody, pri.idx))
+		w.Reloc(pkgbits.RelocBody, l.relocIdx(pri.pr, pkgbits.RelocBody, pri.idx))
	}

-	w.sync(syncEOF)
+	w.Sync(pkgbits.SyncEOF)
}

-func (l *linker) relocTypeExt(w *encoder, name *ir.Name) {
-	w.sync(syncTypeExt)
+func (l *linker) relocTypeExt(w *pkgbits.Encoder, name *ir.Name) {
+	w.Sync(pkgbits.SyncTypeExt)

	typ := name.Type()
@@ -242,55 +247,28 @@ func (l *linker) relocTypeExt(w *encoder, name *ir.Name) {
	}
}

-func (l *linker) relocVarExt(w *encoder, name *ir.Name) {
-	w.sync(syncVarExt)
+func (l *linker) relocVarExt(w *pkgbits.Encoder, name *ir.Name) {
+	w.Sync(pkgbits.SyncVarExt)

	l.linkname(w, name)
}

-func (l *linker) linkname(w *encoder, name *ir.Name) {
-	w.sync(syncLinkname)
+func (l *linker) linkname(w *pkgbits.Encoder, name *ir.Name) {
+	w.Sync(pkgbits.SyncLinkname)

	linkname := name.Sym().Linkname
	if !l.lsymIdx(w, linkname, name.Linksym()) {
-		w.string(linkname)
+		w.String(linkname)
	}
}

-func (l *linker) lsymIdx(w *encoder, linkname string, lsym *obj.LSym) bool {
+func (l *linker) lsymIdx(w *pkgbits.Encoder, linkname string, lsym *obj.LSym) bool {
	if lsym.PkgIdx > goobj.PkgIdxSelf || (lsym.PkgIdx == goobj.PkgIdxInvalid && !lsym.Indexed()) || linkname != "" {
-		w.int64(-1)
+		w.Int64(-1)
		return false
	}

	// For a defined symbol, export its index.
	// For re-exporting an imported symbol, pass its index through.
-	w.int64(int64(lsym.SymIdx))
+	w.Int64(int64(lsym.SymIdx))
	return true
}

-// @@@ Helpers
-
-// TODO(mdempsky): These should probably be removed. I think they're a
-// smell that the export data format is not yet quite right.
-
-func (pr *pkgDecoder) peekPkgPath(idx int) string {
-	r := pr.newDecoder(relocPkg, idx, syncPkgDef)
-	path := r.string()
-	if path == "" {
-		path = pr.pkgPath
-	}
-	return path
-}
-
-func (pr *pkgDecoder) peekObj(idx int) (string, string, codeObj) {
-	r := pr.newDecoder(relocName, idx, syncObject1)
-	r.sync(syncSym)
-	r.sync(syncPkg)
-	path := pr.peekPkgPath(r.reloc(relocPkg))
-	name := r.string()
-	assert(name != "")
-	tag := codeObj(r.code(syncCodeObj))
-	return path, name, tag
-}

File diff suppressed because it is too large.


@@ -9,254 +9,13 @@ package noder

import (
	"fmt"

-	"cmd/compile/internal/base"
-	"cmd/compile/internal/ir"
	"cmd/compile/internal/syntax"
-	"cmd/compile/internal/types2"
-	"cmd/internal/src"
)

// This file defines helper functions useful for satisfying toolstash
// -cmp when compared against the legacy frontend behavior, but can be
// removed after that's no longer a concern.
// quirksMode controls whether behavior specific to satisfying
// toolstash -cmp is used.
func quirksMode() bool {
return base.Debug.UnifiedQuirks != 0
}
// posBasesOf returns all of the position bases in the source files,
// as seen in a straightforward traversal.
//
// This is necessary to ensure position bases (and thus file names)
// get registered in the same order as noder would visit them.
func posBasesOf(noders []*noder) []*syntax.PosBase {
seen := make(map[*syntax.PosBase]bool)
var bases []*syntax.PosBase
for _, p := range noders {
syntax.Crawl(p.file, func(n syntax.Node) bool {
if b := n.Pos().Base(); !seen[b] {
bases = append(bases, b)
seen[b] = true
}
return false
})
}
return bases
}
// importedObjsOf returns the imported objects (i.e., referenced
// objects not declared by curpkg) from the parsed source files, in
// the order that typecheck used to load their definitions.
//
// This is needed because loading the definitions for imported objects
// can also add file names.
func importedObjsOf(curpkg *types2.Package, info *types2.Info, noders []*noder) []types2.Object {
// This code is complex because it matches the precise order that
// typecheck recursively and repeatedly traverses the IR. It's meant
// to be thrown away eventually anyway.
seen := make(map[types2.Object]bool)
var objs []types2.Object
var phase int
decls := make(map[types2.Object]syntax.Decl)
assoc := func(decl syntax.Decl, names ...*syntax.Name) {
for _, name := range names {
obj, ok := info.Defs[name]
assert(ok)
decls[obj] = decl
}
}
for _, p := range noders {
syntax.Crawl(p.file, func(n syntax.Node) bool {
switch n := n.(type) {
case *syntax.ConstDecl:
assoc(n, n.NameList...)
case *syntax.FuncDecl:
assoc(n, n.Name)
case *syntax.TypeDecl:
assoc(n, n.Name)
case *syntax.VarDecl:
assoc(n, n.NameList...)
case *syntax.BlockStmt:
return true
}
return false
})
}
var visited map[syntax.Decl]bool
var resolveDecl func(n syntax.Decl)
var resolveNode func(n syntax.Node, top bool)
resolveDecl = func(n syntax.Decl) {
if visited[n] {
return
}
visited[n] = true
switch n := n.(type) {
case *syntax.ConstDecl:
resolveNode(n.Type, true)
resolveNode(n.Values, true)
case *syntax.FuncDecl:
if n.Recv != nil {
resolveNode(n.Recv, true)
}
resolveNode(n.Type, true)
case *syntax.TypeDecl:
resolveNode(n.Type, true)
case *syntax.VarDecl:
if n.Type != nil {
resolveNode(n.Type, true)
} else {
resolveNode(n.Values, true)
}
}
}
resolveObj := func(pos syntax.Pos, obj types2.Object) {
switch obj.Pkg() {
case nil:
// builtin; nothing to do
case curpkg:
if decl, ok := decls[obj]; ok {
resolveDecl(decl)
}
default:
if obj.Parent() == obj.Pkg().Scope() && !seen[obj] {
seen[obj] = true
objs = append(objs, obj)
}
}
}
checkdefat := func(pos syntax.Pos, n *syntax.Name) {
if n.Value == "_" {
return
}
obj, ok := info.Uses[n]
if !ok {
obj, ok = info.Defs[n]
if !ok {
return
}
}
if obj == nil {
return
}
resolveObj(pos, obj)
}
checkdef := func(n *syntax.Name) { checkdefat(n.Pos(), n) }
var later []syntax.Node
resolveNode = func(n syntax.Node, top bool) {
if n == nil {
return
}
syntax.Crawl(n, func(n syntax.Node) bool {
switch n := n.(type) {
case *syntax.Name:
checkdef(n)
case *syntax.SelectorExpr:
if name, ok := n.X.(*syntax.Name); ok {
if _, isPkg := info.Uses[name].(*types2.PkgName); isPkg {
checkdefat(n.X.Pos(), n.Sel)
return true
}
}
case *syntax.AssignStmt:
resolveNode(n.Rhs, top)
resolveNode(n.Lhs, top)
return true
case *syntax.VarDecl:
resolveNode(n.Values, top)
case *syntax.FuncLit:
if top {
resolveNode(n.Type, top)
later = append(later, n.Body)
return true
}
case *syntax.BlockStmt:
if phase >= 3 {
for _, stmt := range n.List {
resolveNode(stmt, false)
}
}
return true
}
return false
})
}
for phase = 1; phase <= 5; phase++ {
visited = map[syntax.Decl]bool{}
for _, p := range noders {
for _, decl := range p.file.DeclList {
switch decl := decl.(type) {
case *syntax.ConstDecl:
resolveDecl(decl)
case *syntax.FuncDecl:
resolveDecl(decl)
if phase >= 3 && decl.Body != nil {
resolveNode(decl.Body, true)
}
case *syntax.TypeDecl:
if !decl.Alias || phase >= 2 {
resolveDecl(decl)
}
case *syntax.VarDecl:
if phase >= 2 {
resolveNode(decl.Values, true)
resolveDecl(decl)
}
}
}
if phase >= 5 {
syntax.Crawl(p.file, func(n syntax.Node) bool {
if name, ok := n.(*syntax.Name); ok {
if obj, ok := info.Uses[name]; ok {
resolveObj(name.Pos(), obj)
}
}
return false
})
}
}
for i := 0; i < len(later); i++ {
resolveNode(later[i], true)
}
later = nil
}
return objs
}
// typeExprEndPos returns the position that noder would leave base.Pos
// after parsing the given type expression.
func typeExprEndPos(expr0 syntax.Expr) syntax.Pos {
@@ -320,131 +79,3 @@ func lastFieldType(fields []*syntax.Field) syntax.Expr {
	}
	return fields[len(fields)-1].Type
}
// sumPos returns the position that noder.sum would produce for
// constant expression x.
func sumPos(x syntax.Expr) syntax.Pos {
orig := x
for {
switch x1 := x.(type) {
case *syntax.BasicLit:
assert(x1.Kind == syntax.StringLit)
return x1.Pos()
case *syntax.Operation:
assert(x1.Op == syntax.Add && x1.Y != nil)
if r, ok := x1.Y.(*syntax.BasicLit); ok {
assert(r.Kind == syntax.StringLit)
x = x1.X
continue
}
}
return orig.Pos()
}
}
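`sumPos` walks a left-associative chain of string concatenations down its left spine, because `"a" + "b" + "c"` parses as `(("a" + "b") + "c")` and the whole sum's position is that of the leftmost literal. A simplified, self-contained sketch of the same walk (the `lit`/`add` types here are illustrative, not the syntax package's):

```go
package main

import "fmt"

// lit and add are toy stand-ins for syntax.BasicLit and syntax.Operation.
type lit string
type add struct{ x, y any }

// leftmost descends the left spine of a concatenation chain, as sumPos
// does to find the position of the whole sum.
func leftmost(x any) lit {
	for {
		if a, ok := x.(add); ok {
			x = a.x
			continue
		}
		return x.(lit)
	}
}

func main() {
	sum := add{add{lit("a"), lit("b")}, lit("c")} // ("a" + "b") + "c"
	fmt.Println(leftmost(sum))                    // a
}
```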
// funcParamsEndPos returns the value of base.Pos left by noder after
// processing a function signature.
func funcParamsEndPos(fn *ir.Func) src.XPos {
sig := fn.Nname.Type()
fields := sig.Results().FieldSlice()
if len(fields) == 0 {
fields = sig.Params().FieldSlice()
if len(fields) == 0 {
fields = sig.Recvs().FieldSlice()
if len(fields) == 0 {
if fn.OClosure != nil {
return fn.Nname.Ntype.Pos()
}
return fn.Pos()
}
}
}
return fields[len(fields)-1].Pos
}
type dupTypes struct {
origs map[types2.Type]types2.Type
}
func (d *dupTypes) orig(t types2.Type) types2.Type {
if orig, ok := d.origs[t]; ok {
return orig
}
return t
}
func (d *dupTypes) add(t, orig types2.Type) {
if t == orig {
return
}
if d.origs == nil {
d.origs = make(map[types2.Type]types2.Type)
}
assert(d.origs[t] == nil)
d.origs[t] = orig
switch t := t.(type) {
case *types2.Pointer:
orig := orig.(*types2.Pointer)
d.add(t.Elem(), orig.Elem())
case *types2.Slice:
orig := orig.(*types2.Slice)
d.add(t.Elem(), orig.Elem())
case *types2.Map:
orig := orig.(*types2.Map)
d.add(t.Key(), orig.Key())
d.add(t.Elem(), orig.Elem())
case *types2.Array:
orig := orig.(*types2.Array)
assert(t.Len() == orig.Len())
d.add(t.Elem(), orig.Elem())
case *types2.Chan:
orig := orig.(*types2.Chan)
assert(t.Dir() == orig.Dir())
d.add(t.Elem(), orig.Elem())
case *types2.Struct:
orig := orig.(*types2.Struct)
assert(t.NumFields() == orig.NumFields())
for i := 0; i < t.NumFields(); i++ {
d.add(t.Field(i).Type(), orig.Field(i).Type())
}
case *types2.Interface:
orig := orig.(*types2.Interface)
assert(t.NumExplicitMethods() == orig.NumExplicitMethods())
assert(t.NumEmbeddeds() == orig.NumEmbeddeds())
for i := 0; i < t.NumExplicitMethods(); i++ {
d.add(t.ExplicitMethod(i).Type(), orig.ExplicitMethod(i).Type())
}
for i := 0; i < t.NumEmbeddeds(); i++ {
d.add(t.EmbeddedType(i), orig.EmbeddedType(i))
}
case *types2.Signature:
orig := orig.(*types2.Signature)
assert((t.Recv() == nil) == (orig.Recv() == nil))
if t.Recv() != nil {
d.add(t.Recv().Type(), orig.Recv().Type())
}
d.add(t.Params(), orig.Params())
d.add(t.Results(), orig.Results())
case *types2.Tuple:
orig := orig.(*types2.Tuple)
assert(t.Len() == orig.Len())
for i := 0; i < t.Len(); i++ {
d.add(t.At(i).Type(), orig.At(i).Type())
}
default:
assert(types2.Identical(t, orig))
}
}

File diff suppressed because it is too large.


@@ -410,7 +410,8 @@ func (g *genInst) buildClosure(outer *ir.Func, x ir.Node) ir.Node {
	fn, formalParams, formalResults := startClosure(pos, outer, typ)

	// This is the dictionary we want to use.
-	// It may be a constant, or it may be a dictionary acquired from the outer function's dictionary.
+	// It may be a constant, it may be the outer functions's dictionary, or it may be
+	// a subdictionary acquired from the outer function's dictionary.
	// For the latter, dictVar is a variable in the outer function's scope, set to the subdictionary
	// read from the outer function's dictionary.
	var dictVar *ir.Name
@@ -640,6 +641,11 @@ func (g *genInst) getInstantiation(nameNode *ir.Name, shapes []*types.Type, isMe
		// over any pointer)
		recvType := nameNode.Type().Recv().Type
		recvType = deref(recvType)
+		if recvType.IsFullyInstantiated() {
+			// Get the type of the base generic type, so we get
+			// its original typeparams.
+			recvType = recvType.OrigSym().Def.(*ir.Name).Type()
+		}
		tparams = recvType.RParams()
	} else {
		fields := nameNode.Type().TParams().Fields().Slice()
@@ -656,11 +662,9 @@ func (g *genInst) getInstantiation(nameNode *ir.Name, shapes []*types.Type, isMe
	s1 := make([]*types.Type, len(shapes))
	for i, t := range shapes {
		var tparam *types.Type
-		if tparams[i].Kind() == types.TTYPEPARAM {
		// Shapes are grouped differently for structural types, so we
		// pass the type param to Shapify(), so we can distinguish.
		tparam = tparams[i]
-		}
		if !t.IsShape() {
			s1[i] = typecheck.Shapify(t, i, tparam)
		} else {
@@ -1055,8 +1059,6 @@ func (subst *subster) node(n ir.Node) ir.Node {
			// Transform the conversion, now that we know the
			// type argument.
			m = transformConvCall(call)
-			// CONVIFACE transformation was already done in noder2
-			assert(m.Op() != ir.OCONVIFACE)

		case ir.OMETHVALUE, ir.OMETHEXPR:
			// Redo the transformation of OXDOT, now that we
@@ -1076,14 +1078,7 @@ func (subst *subster) node(n ir.Node) ir.Node {
		case ir.ONAME:
			name := call.X.Name()
			if name.BuiltinOp != ir.OXXX {
-				switch name.BuiltinOp {
-				case ir.OMAKE, ir.OREAL, ir.OIMAG, ir.OAPPEND, ir.ODELETE, ir.OALIGNOF, ir.OOFFSETOF, ir.OSIZEOF:
-					// Transform these builtins now that we
-					// know the type of the args.
				m = transformBuiltin(call)
-				default:
-					base.FatalfAt(call.Pos(), "Unexpected builtin op")
-				}
			} else {
				// This is the case of a function value that was a
				// type parameter (implied to be a function via a
@@ -1154,6 +1149,7 @@ func (subst *subster) node(n ir.Node) ir.Node {
	newfn.Dcl = append(newfn.Dcl, ldict)
	as := ir.NewAssignStmt(x.Pos(), ldict, cdict)
	as.SetTypecheck(1)
+	ldict.Defn = as
	newfn.Body.Append(as)

	// Create inst info for the instantiated closure. The dict

View file

@ -1,187 +0,0 @@
// UNREVIEWED
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package noder
import (
"fmt"
"strings"
)
// enableSync controls whether sync markers are written into unified
// IR's export data format and also whether they're expected when
// reading them back in. They're inessential to the correct
// functioning of unified IR, but are helpful during development to
// detect mistakes.
//
// When sync is enabled, writer stack frames will also be included in
// the export data. Currently, a fixed number of frames are included,
// controlled by -d=syncframes (default 0).
const enableSync = true
// fmtFrames formats a backtrace for reporting reader/writer desyncs.
func fmtFrames(pcs ...uintptr) []string {
res := make([]string, 0, len(pcs))
walkFrames(pcs, func(file string, line int, name string, offset uintptr) {
// Trim package from function name. It's just redundant noise.
name = strings.TrimPrefix(name, "cmd/compile/internal/noder.")
res = append(res, fmt.Sprintf("%s:%v: %s +0x%v", file, line, name, offset))
})
return res
}
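The prefix-trimming step above is a small but handy technique for readable backtraces. A minimal, self-contained sketch (the `trimPkg` helper name is made up for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// trimPkg strips a known package qualifier from a fully qualified
// function name, mirroring how fmtFrames removes redundant noise
// before formatting each frame.
func trimPkg(name string) string {
	return strings.TrimPrefix(name, "cmd/compile/internal/noder.")
}

func main() {
	fmt.Println(trimPkg("cmd/compile/internal/noder.(*reader).sync"))
	// Names from other packages pass through unchanged.
	fmt.Println(trimPkg("runtime.goexit"))
}
```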
type frameVisitor func(file string, line int, name string, offset uintptr)
// syncMarker is an enum type that represents markers that may be
// written to export data to ensure the reader and writer stay
// synchronized.
type syncMarker int
//go:generate stringer -type=syncMarker -trimprefix=sync
// TODO(mdempsky): Cleanup unneeded sync markers.
// TODO(mdempsky): Split these markers into public/stable markers, and
// private ones. Also, trim unused ones.
const (
_ syncMarker = iota
syncNode
syncBool
syncInt64
syncUint64
syncString
syncPos
syncPkg
syncSym
syncSelector
syncKind
syncType
syncTypePkg
syncSignature
syncParam
syncOp
syncObject
syncExpr
syncStmt
syncDecl
syncConstDecl
syncFuncDecl
syncTypeDecl
syncVarDecl
syncPragma
syncValue
syncEOF
syncMethod
syncFuncBody
syncUse
syncUseObj
syncObjectIdx
syncTypeIdx
syncBOF
syncEntry
syncOpenScope
syncCloseScope
syncGlobal
syncLocal
syncDefine
syncDefLocal
syncUseLocal
syncDefGlobal
syncUseGlobal
syncTypeParams
syncUseLabel
syncDefLabel
syncFuncLit
syncCommonFunc
syncBodyRef
syncLinksymExt
syncHack
syncSetlineno
syncName
syncImportDecl
syncDeclNames
syncDeclName
syncExprList
syncExprs
syncWrapname
syncTypeExpr
syncTypeExprOrNil
syncChanDir
syncParams
syncCloseAnotherScope
syncSum
syncUnOp
syncBinOp
syncStructType
syncInterfaceType
syncPackname
syncEmbedded
syncStmts
syncStmtsFall
syncStmtFall
syncBlockStmt
syncIfStmt
syncForStmt
syncSwitchStmt
syncRangeStmt
syncCaseClause
syncCommClause
syncSelectStmt
syncDecls
syncLabeledStmt
syncCompLit
sync1
sync2
sync3
sync4
syncN
syncDefImplicit
syncUseName
syncUseObjLocal
syncAddLocal
syncBothSignature
syncSetUnderlying
syncLinkname
syncStmt1
syncStmtsEnd
syncDeclare
syncTopDecls
syncTopConstDecl
syncTopFuncDecl
syncTopTypeDecl
syncTopVarDecl
syncObject1
syncAddBody
syncLabel
syncFuncExt
syncMethExt
syncOptLabel
syncScalar
syncStmtDecls
syncDeclLocal
syncObjLocal
syncObjLocal1
syncDeclareLocal
syncPublic
syncPrivate
syncRelocs
syncReloc
syncUseReloc
syncVarExt
syncPkgDef
syncTypeExt
syncVal
syncCodeObj
syncPosBase
syncLocalIdent
syncTypeParamNames
syncTypeParamBounds
syncImplicitTypes
syncObjectName
)

View file

@ -1,156 +0,0 @@
// Code generated by "stringer -type=syncMarker -trimprefix=sync"; DO NOT EDIT.
package noder
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[syncNode-1]
_ = x[syncBool-2]
_ = x[syncInt64-3]
_ = x[syncUint64-4]
_ = x[syncString-5]
_ = x[syncPos-6]
_ = x[syncPkg-7]
_ = x[syncSym-8]
_ = x[syncSelector-9]
_ = x[syncKind-10]
_ = x[syncType-11]
_ = x[syncTypePkg-12]
_ = x[syncSignature-13]
_ = x[syncParam-14]
_ = x[syncOp-15]
_ = x[syncObject-16]
_ = x[syncExpr-17]
_ = x[syncStmt-18]
_ = x[syncDecl-19]
_ = x[syncConstDecl-20]
_ = x[syncFuncDecl-21]
_ = x[syncTypeDecl-22]
_ = x[syncVarDecl-23]
_ = x[syncPragma-24]
_ = x[syncValue-25]
_ = x[syncEOF-26]
_ = x[syncMethod-27]
_ = x[syncFuncBody-28]
_ = x[syncUse-29]
_ = x[syncUseObj-30]
_ = x[syncObjectIdx-31]
_ = x[syncTypeIdx-32]
_ = x[syncBOF-33]
_ = x[syncEntry-34]
_ = x[syncOpenScope-35]
_ = x[syncCloseScope-36]
_ = x[syncGlobal-37]
_ = x[syncLocal-38]
_ = x[syncDefine-39]
_ = x[syncDefLocal-40]
_ = x[syncUseLocal-41]
_ = x[syncDefGlobal-42]
_ = x[syncUseGlobal-43]
_ = x[syncTypeParams-44]
_ = x[syncUseLabel-45]
_ = x[syncDefLabel-46]
_ = x[syncFuncLit-47]
_ = x[syncCommonFunc-48]
_ = x[syncBodyRef-49]
_ = x[syncLinksymExt-50]
_ = x[syncHack-51]
_ = x[syncSetlineno-52]
_ = x[syncName-53]
_ = x[syncImportDecl-54]
_ = x[syncDeclNames-55]
_ = x[syncDeclName-56]
_ = x[syncExprList-57]
_ = x[syncExprs-58]
_ = x[syncWrapname-59]
_ = x[syncTypeExpr-60]
_ = x[syncTypeExprOrNil-61]
_ = x[syncChanDir-62]
_ = x[syncParams-63]
_ = x[syncCloseAnotherScope-64]
_ = x[syncSum-65]
_ = x[syncUnOp-66]
_ = x[syncBinOp-67]
_ = x[syncStructType-68]
_ = x[syncInterfaceType-69]
_ = x[syncPackname-70]
_ = x[syncEmbedded-71]
_ = x[syncStmts-72]
_ = x[syncStmtsFall-73]
_ = x[syncStmtFall-74]
_ = x[syncBlockStmt-75]
_ = x[syncIfStmt-76]
_ = x[syncForStmt-77]
_ = x[syncSwitchStmt-78]
_ = x[syncRangeStmt-79]
_ = x[syncCaseClause-80]
_ = x[syncCommClause-81]
_ = x[syncSelectStmt-82]
_ = x[syncDecls-83]
_ = x[syncLabeledStmt-84]
_ = x[syncCompLit-85]
_ = x[sync1-86]
_ = x[sync2-87]
_ = x[sync3-88]
_ = x[sync4-89]
_ = x[syncN-90]
_ = x[syncDefImplicit-91]
_ = x[syncUseName-92]
_ = x[syncUseObjLocal-93]
_ = x[syncAddLocal-94]
_ = x[syncBothSignature-95]
_ = x[syncSetUnderlying-96]
_ = x[syncLinkname-97]
_ = x[syncStmt1-98]
_ = x[syncStmtsEnd-99]
_ = x[syncDeclare-100]
_ = x[syncTopDecls-101]
_ = x[syncTopConstDecl-102]
_ = x[syncTopFuncDecl-103]
_ = x[syncTopTypeDecl-104]
_ = x[syncTopVarDecl-105]
_ = x[syncObject1-106]
_ = x[syncAddBody-107]
_ = x[syncLabel-108]
_ = x[syncFuncExt-109]
_ = x[syncMethExt-110]
_ = x[syncOptLabel-111]
_ = x[syncScalar-112]
_ = x[syncStmtDecls-113]
_ = x[syncDeclLocal-114]
_ = x[syncObjLocal-115]
_ = x[syncObjLocal1-116]
_ = x[syncDeclareLocal-117]
_ = x[syncPublic-118]
_ = x[syncPrivate-119]
_ = x[syncRelocs-120]
_ = x[syncReloc-121]
_ = x[syncUseReloc-122]
_ = x[syncVarExt-123]
_ = x[syncPkgDef-124]
_ = x[syncTypeExt-125]
_ = x[syncVal-126]
_ = x[syncCodeObj-127]
_ = x[syncPosBase-128]
_ = x[syncLocalIdent-129]
_ = x[syncTypeParamNames-130]
_ = x[syncTypeParamBounds-131]
_ = x[syncImplicitTypes-132]
_ = x[syncObjectName-133]
}
const _syncMarker_name = "NodeBoolInt64Uint64StringPosPkgSymSelectorKindTypeTypePkgSignatureParamOpObjectExprStmtDeclConstDeclFuncDeclTypeDeclVarDeclPragmaValueEOFMethodFuncBodyUseUseObjObjectIdxTypeIdxBOFEntryOpenScopeCloseScopeGlobalLocalDefineDefLocalUseLocalDefGlobalUseGlobalTypeParamsUseLabelDefLabelFuncLitCommonFuncBodyRefLinksymExtHackSetlinenoNameImportDeclDeclNamesDeclNameExprListExprsWrapnameTypeExprTypeExprOrNilChanDirParamsCloseAnotherScopeSumUnOpBinOpStructTypeInterfaceTypePacknameEmbeddedStmtsStmtsFallStmtFallBlockStmtIfStmtForStmtSwitchStmtRangeStmtCaseClauseCommClauseSelectStmtDeclsLabeledStmtCompLit1234NDefImplicitUseNameUseObjLocalAddLocalBothSignatureSetUnderlyingLinknameStmt1StmtsEndDeclareTopDeclsTopConstDeclTopFuncDeclTopTypeDeclTopVarDeclObject1AddBodyLabelFuncExtMethExtOptLabelScalarStmtDeclsDeclLocalObjLocalObjLocal1DeclareLocalPublicPrivateRelocsRelocUseRelocVarExtPkgDefTypeExtValCodeObjPosBaseLocalIdentTypeParamNamesTypeParamBoundsImplicitTypesObjectName"
var _syncMarker_index = [...]uint16{0, 4, 8, 13, 19, 25, 28, 31, 34, 42, 46, 50, 57, 66, 71, 73, 79, 83, 87, 91, 100, 108, 116, 123, 129, 134, 137, 143, 151, 154, 160, 169, 176, 179, 184, 193, 203, 209, 214, 220, 228, 236, 245, 254, 264, 272, 280, 287, 297, 304, 314, 318, 327, 331, 341, 350, 358, 366, 371, 379, 387, 400, 407, 413, 430, 433, 437, 442, 452, 465, 473, 481, 486, 495, 503, 512, 518, 525, 535, 544, 554, 564, 574, 579, 590, 597, 598, 599, 600, 601, 602, 613, 620, 631, 639, 652, 665, 673, 678, 686, 693, 701, 713, 724, 735, 745, 752, 759, 764, 771, 778, 786, 792, 801, 810, 818, 827, 839, 845, 852, 858, 863, 871, 877, 883, 890, 893, 900, 907, 917, 931, 946, 959, 969}
func (i syncMarker) String() string {
i -= 1
if i < 0 || i >= syncMarker(len(_syncMarker_index)-1) {
return "syncMarker(" + strconv.FormatInt(int64(i+1), 10) + ")"
}
return _syncMarker_name[_syncMarker_index[i]:_syncMarker_index[i+1]]
}
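The generated `String` method above uses stringer's compact two-array encoding: every name is concatenated into one string, and a parallel index array records each name's start offset, so a lookup is a single slice expression. A minimal sketch of the same encoding on a tiny hypothetical enum:

```go
package main

import "fmt"

// color is a tiny enum encoded the way stringer encodes syncMarker:
// one concatenated name string plus an offset index array.
type color int

const (
	red color = iota
	green
	blue
)

const colorName = "redgreenblue"

// colorIndex[i] is where colorName's i-th name starts; the final
// entry marks the end of the last name.
var colorIndex = [...]uint8{0, 3, 8, 12}

func (c color) String() string {
	if c < 0 || int(c) >= len(colorIndex)-1 {
		// Out-of-range values fall back to a numeric form.
		return fmt.Sprintf("color(%d)", int(c))
	}
	return colorName[colorIndex[c]:colorIndex[c+1]]
}

func main() {
	fmt.Println(red, green, blue) // prints "red green blue"
}
```

The payoff is one string allocation for the whole enum instead of one per name, which is why stringer emits this shape.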

View file

@ -1046,13 +1046,7 @@ func transformCompLit(n *ir.CompLitExpr) (res ir.Node) {
kv := l.(*ir.KeyExpr) kv := l.(*ir.KeyExpr)
key := kv.Key key := kv.Key
// Sym might have resolved to name in other top-level
// package, because of import dot. Redirect to correct sym
// before we do the lookup.
s := key.Sym() s := key.Sym()
if id, ok := key.(*ir.Ident); ok && typecheck.DotImportRefs[id] != nil {
s = typecheck.Lookup(s.Name)
}
if types.IsExported(s.Name) && s.Pkg != types.LocalPkg { if types.IsExported(s.Name) && s.Pkg != types.LocalPkg {
// Exported field names should always have // Exported field names should always have
// local pkg. We only need to do this // local pkg. We only need to do this

View file

@ -10,11 +10,13 @@ import (
"bytes" "bytes"
"fmt" "fmt"
"internal/goversion" "internal/goversion"
"internal/pkgbits"
"io" "io"
"runtime" "runtime"
"sort" "sort"
"cmd/compile/internal/base" "cmd/compile/internal/base"
"cmd/compile/internal/importer"
"cmd/compile/internal/inline" "cmd/compile/internal/inline"
"cmd/compile/internal/ir" "cmd/compile/internal/ir"
"cmd/compile/internal/typecheck" "cmd/compile/internal/typecheck"
@ -72,18 +74,14 @@ var localPkgReader *pkgReader
func unified(noders []*noder) { func unified(noders []*noder) {
inline.NewInline = InlineCall inline.NewInline = InlineCall
if !quirksMode() {
writeNewExportFunc = writeNewExport writeNewExportFunc = writeNewExport
} else if base.Flag.G != 0 {
base.Errorf("cannot use -G and -d=quirksmode together")
}
newReadImportFunc = func(data string, pkg1 *types.Pkg, ctxt *types2.Context, packages map[string]*types2.Package) (pkg2 *types2.Package, err error) { newReadImportFunc = func(data string, pkg1 *types.Pkg, ctxt *types2.Context, packages map[string]*types2.Package) (pkg2 *types2.Package, err error) {
pr := newPkgDecoder(pkg1.Path, data) pr := pkgbits.NewPkgDecoder(pkg1.Path, data)
// Read package descriptors for both types2 and compiler backend. // Read package descriptors for both types2 and compiler backend.
readPackage(newPkgReader(pr), pkg1) readPackage(newPkgReader(pr), pkg1)
pkg2 = readPackage2(ctxt, packages, pr) pkg2 = importer.ReadPackage(ctxt, packages, pr)
return return
} }
@ -102,10 +100,10 @@ func unified(noders []*noder) {
typecheck.TypecheckAllowed = true typecheck.TypecheckAllowed = true
localPkgReader = newPkgReader(newPkgDecoder(types.LocalPkg.Path, data)) localPkgReader = newPkgReader(pkgbits.NewPkgDecoder(types.LocalPkg.Path, data))
readPackage(localPkgReader, types.LocalPkg) readPackage(localPkgReader, types.LocalPkg)
r := localPkgReader.newReader(relocMeta, privateRootIdx, syncPrivate) r := localPkgReader.newReader(pkgbits.RelocMeta, pkgbits.PrivateRootIdx, pkgbits.SyncPrivate)
r.pkgInit(types.LocalPkg, target) r.pkgInit(types.LocalPkg, target)
// Type-check any top-level assignments. We ignore non-assignments // Type-check any top-level assignments. We ignore non-assignments
@ -166,36 +164,36 @@ func writePkgStub(noders []*noder) string {
pw.collectDecls(noders) pw.collectDecls(noders)
publicRootWriter := pw.newWriter(relocMeta, syncPublic) publicRootWriter := pw.newWriter(pkgbits.RelocMeta, pkgbits.SyncPublic)
privateRootWriter := pw.newWriter(relocMeta, syncPrivate) privateRootWriter := pw.newWriter(pkgbits.RelocMeta, pkgbits.SyncPrivate)
assert(publicRootWriter.idx == publicRootIdx) assert(publicRootWriter.Idx == pkgbits.PublicRootIdx)
assert(privateRootWriter.idx == privateRootIdx) assert(privateRootWriter.Idx == pkgbits.PrivateRootIdx)
{ {
w := publicRootWriter w := publicRootWriter
w.pkg(pkg) w.pkg(pkg)
w.bool(false) // has init; XXX w.Bool(false) // has init; XXX
scope := pkg.Scope() scope := pkg.Scope()
names := scope.Names() names := scope.Names()
w.len(len(names)) w.Len(len(names))
for _, name := range scope.Names() { for _, name := range scope.Names() {
w.obj(scope.Lookup(name), nil) w.obj(scope.Lookup(name), nil)
} }
w.sync(syncEOF) w.Sync(pkgbits.SyncEOF)
w.flush() w.Flush()
} }
{ {
w := privateRootWriter w := privateRootWriter
w.pkgInit(noders) w.pkgInit(noders)
w.flush() w.Flush()
} }
var sb bytes.Buffer // TODO(mdempsky): strings.Builder after #44505 is resolved var sb bytes.Buffer // TODO(mdempsky): strings.Builder after #44505 is resolved
pw.dump(&sb) pw.DumpTo(&sb)
// At this point, we're done with types2. Make sure the package is // At this point, we're done with types2. Make sure the package is
// garbage collected. // garbage collected.
@ -239,26 +237,26 @@ func freePackage(pkg *types2.Package) {
} }
func readPackage(pr *pkgReader, importpkg *types.Pkg) { func readPackage(pr *pkgReader, importpkg *types.Pkg) {
r := pr.newReader(relocMeta, publicRootIdx, syncPublic) r := pr.newReader(pkgbits.RelocMeta, pkgbits.PublicRootIdx, pkgbits.SyncPublic)
pkg := r.pkg() pkg := r.pkg()
assert(pkg == importpkg) assert(pkg == importpkg)
if r.bool() { if r.Bool() {
sym := pkg.Lookup(".inittask") sym := pkg.Lookup(".inittask")
task := ir.NewNameAt(src.NoXPos, sym) task := ir.NewNameAt(src.NoXPos, sym)
task.Class = ir.PEXTERN task.Class = ir.PEXTERN
sym.Def = task sym.Def = task
} }
for i, n := 0, r.len(); i < n; i++ { for i, n := 0, r.Len(); i < n; i++ {
r.sync(syncObject) r.Sync(pkgbits.SyncObject)
assert(!r.bool()) assert(!r.Bool())
idx := r.reloc(relocObj) idx := r.Reloc(pkgbits.RelocObj)
assert(r.len() == 0) assert(r.Len() == 0)
path, name, code := r.p.peekObj(idx) path, name, code := r.p.PeekObj(idx)
if code != objStub { if code != pkgbits.ObjStub {
objReader[types.NewPkg(path, "").Lookup(name)] = pkgReaderIndex{pr, idx, nil} objReader[types.NewPkg(path, "").Lookup(name)] = pkgReaderIndex{pr, idx, nil}
} }
} }
@ -266,42 +264,42 @@ func readPackage(pr *pkgReader, importpkg *types.Pkg) {
func writeNewExport(out io.Writer) { func writeNewExport(out io.Writer) {
l := linker{ l := linker{
pw: newPkgEncoder(), pw: pkgbits.NewPkgEncoder(base.Debug.SyncFrames),
pkgs: make(map[string]int), pkgs: make(map[string]int),
decls: make(map[*types.Sym]int), decls: make(map[*types.Sym]int),
} }
publicRootWriter := l.pw.newEncoder(relocMeta, syncPublic) publicRootWriter := l.pw.NewEncoder(pkgbits.RelocMeta, pkgbits.SyncPublic)
assert(publicRootWriter.idx == publicRootIdx) assert(publicRootWriter.Idx == pkgbits.PublicRootIdx)
var selfPkgIdx int var selfPkgIdx int
{ {
pr := localPkgReader pr := localPkgReader
r := pr.newDecoder(relocMeta, publicRootIdx, syncPublic) r := pr.NewDecoder(pkgbits.RelocMeta, pkgbits.PublicRootIdx, pkgbits.SyncPublic)
r.sync(syncPkg) r.Sync(pkgbits.SyncPkg)
selfPkgIdx = l.relocIdx(pr, relocPkg, r.reloc(relocPkg)) selfPkgIdx = l.relocIdx(pr, pkgbits.RelocPkg, r.Reloc(pkgbits.RelocPkg))
r.bool() // has init r.Bool() // has init
for i, n := 0, r.len(); i < n; i++ { for i, n := 0, r.Len(); i < n; i++ {
r.sync(syncObject) r.Sync(pkgbits.SyncObject)
assert(!r.bool()) assert(!r.Bool())
idx := r.reloc(relocObj) idx := r.Reloc(pkgbits.RelocObj)
assert(r.len() == 0) assert(r.Len() == 0)
xpath, xname, xtag := pr.peekObj(idx) xpath, xname, xtag := pr.PeekObj(idx)
assert(xpath == pr.pkgPath) assert(xpath == pr.PkgPath())
assert(xtag != objStub) assert(xtag != pkgbits.ObjStub)
if types.IsExported(xname) { if types.IsExported(xname) {
l.relocIdx(pr, relocObj, idx) l.relocIdx(pr, pkgbits.RelocObj, idx)
} }
} }
r.sync(syncEOF) r.Sync(pkgbits.SyncEOF)
} }
{ {
@ -313,22 +311,22 @@ func writeNewExport(out io.Writer) {
w := publicRootWriter w := publicRootWriter
w.sync(syncPkg) w.Sync(pkgbits.SyncPkg)
w.reloc(relocPkg, selfPkgIdx) w.Reloc(pkgbits.RelocPkg, selfPkgIdx)
w.bool(typecheck.Lookup(".inittask").Def != nil) w.Bool(typecheck.Lookup(".inittask").Def != nil)
w.len(len(idxs)) w.Len(len(idxs))
for _, idx := range idxs { for _, idx := range idxs {
w.sync(syncObject) w.Sync(pkgbits.SyncObject)
w.bool(false) w.Bool(false)
w.reloc(relocObj, idx) w.Reloc(pkgbits.RelocObj, idx)
w.len(0) w.Len(0)
} }
w.sync(syncEOF) w.Sync(pkgbits.SyncEOF)
w.flush() w.Flush()
} }
l.pw.dump(out) l.pw.DumpTo(out)
} }

View file

@ -1,160 +0,0 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package noder_test
import (
"encoding/json"
"flag"
exec "internal/execabs"
"os"
"reflect"
"runtime"
"strings"
"testing"
)
var (
flagCmp = flag.Bool("cmp", false, "enable TestUnifiedCompare")
flagPkgs = flag.String("pkgs", "std", "list of packages to compare (ignored in -short mode)")
flagAll = flag.Bool("all", false, "enable testing of all GOOS/GOARCH targets")
flagParallel = flag.Bool("parallel", false, "test GOOS/GOARCH targets in parallel")
)
// TestUnifiedCompare implements a test similar to running:
//
// $ go build -toolexec="toolstash -cmp" std
//
// The -pkgs flag controls the list of packages tested.
//
// By default, only the native GOOS/GOARCH target is enabled. The -all
// flag enables testing of non-native targets. The -parallel flag
// additionally enables testing of targets in parallel.
//
// Caution: Testing all targets is very resource intensive! On an IBM
// P920 (dual Intel Xeon Gold 6154 CPUs; 36 cores, 192GB RAM), testing
// all targets in parallel takes about 5 minutes. Using the 'go test'
// command's -run flag for subtest matching is recommended for less
// powerful machines.
func TestUnifiedCompare(t *testing.T) {
// TODO(mdempsky): Either re-enable or delete. Disabled for now to
// avoid impeding others' forward progress.
if !*flagCmp {
t.Skip("skipping TestUnifiedCompare (use -cmp to enable)")
}
targets, err := exec.Command("go", "tool", "dist", "list").Output()
if err != nil {
t.Fatal(err)
}
for _, target := range strings.Fields(string(targets)) {
t.Run(target, func(t *testing.T) {
parts := strings.Split(target, "/")
goos, goarch := parts[0], parts[1]
if !(*flagAll || goos == runtime.GOOS && goarch == runtime.GOARCH) {
t.Skip("skipping non-native target (use -all to enable)")
}
if *flagParallel {
t.Parallel()
}
pkgs1 := loadPackages(t, goos, goarch, "-d=unified=0 -d=inlfuncswithclosures=0 -d=unifiedquirks=1 -G=0")
pkgs2 := loadPackages(t, goos, goarch, "-d=unified=1 -d=inlfuncswithclosures=0 -d=unifiedquirks=1 -G=0")
if len(pkgs1) != len(pkgs2) {
t.Fatalf("length mismatch: %v != %v", len(pkgs1), len(pkgs2))
}
for i := range pkgs1 {
pkg1 := pkgs1[i]
pkg2 := pkgs2[i]
path := pkg1.ImportPath
if path != pkg2.ImportPath {
t.Fatalf("mismatched paths: %q != %q", path, pkg2.ImportPath)
}
// Packages that don't have any source files (e.g., packages
// unsafe, embed/internal/embedtest, and cmd/internal/moddeps).
if pkg1.Export == "" && pkg2.Export == "" {
continue
}
if pkg1.BuildID == pkg2.BuildID {
t.Errorf("package %q: build IDs unexpectedly matched", path)
}
// Unlike toolstash -cmp, we're comparing the same compiler
// binary against itself, just with different flags. So we
// don't need to worry about skipping over mismatched version
// strings, but we do need to account for differing build IDs.
//
// Fortunately, build IDs are cryptographic 256-bit hashes,
// and cmd/go provides us with them up front. So we can just
// use them as delimiters to split the files, and then check
// that the substrings are all equal.
file1 := strings.Split(readFile(t, pkg1.Export), pkg1.BuildID)
file2 := strings.Split(readFile(t, pkg2.Export), pkg2.BuildID)
if !reflect.DeepEqual(file1, file2) {
t.Errorf("package %q: compile output differs", path)
}
}
})
}
}
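The comparison step in the removed test splits each export file on that file's own build ID and compares the remaining segments, so the only permitted difference between the two outputs is the ID itself. A minimal sketch of the technique with made-up blob contents (`equalModuloID` is a hypothetical helper name):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// equalModuloID reports whether two blobs are identical once each
// blob's own (differing) build ID is cut out, the same trick
// TestUnifiedCompare applied to compiler export data.
func equalModuloID(data1, id1, data2, id2 string) bool {
	parts1 := strings.Split(data1, id1)
	parts2 := strings.Split(data2, id2)
	return reflect.DeepEqual(parts1, parts2)
}

func main() {
	fmt.Println(equalModuloID("head-AAAA-tail", "AAAA", "head-BBBB-tail", "BBBB")) // true
	fmt.Println(equalModuloID("head-AAAA-tail", "AAAA", "head-BBBB-TAIL", "BBBB")) // false
}
```

This works because the IDs are long cryptographic hashes, so an accidental match of the delimiter elsewhere in the data is vanishingly unlikely.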
type pkg struct {
ImportPath string
Export string
BuildID string
Incomplete bool
}
func loadPackages(t *testing.T, goos, goarch, gcflags string) []pkg {
args := []string{"list", "-e", "-export", "-json", "-gcflags=all=" + gcflags, "--"}
if testing.Short() {
t.Log("short testing mode; only testing package runtime")
args = append(args, "runtime")
} else {
args = append(args, strings.Fields(*flagPkgs)...)
}
cmd := exec.Command("go", args...)
cmd.Env = append(os.Environ(), "GOOS="+goos, "GOARCH="+goarch)
cmd.Stderr = os.Stderr
t.Logf("running %v", cmd)
stdout, err := cmd.StdoutPipe()
if err != nil {
t.Fatal(err)
}
if err := cmd.Start(); err != nil {
t.Fatal(err)
}
var res []pkg
for dec := json.NewDecoder(stdout); dec.More(); {
var pkg pkg
if err := dec.Decode(&pkg); err != nil {
t.Fatal(err)
}
if pkg.Incomplete {
t.Fatalf("incomplete package: %q", pkg.ImportPath)
}
res = append(res, pkg)
}
if err := cmd.Wait(); err != nil {
t.Fatal(err)
}
return res
}
func readFile(t *testing.T, name string) string {
buf, err := os.ReadFile(name)
if err != nil {
t.Fatal(err)
}
return string(buf)
}

File diff suppressed because it is too large


View file

@ -46,10 +46,9 @@ func zerorange(pp *objw.Progs, p *obj.Prog, off, cnt int64, _ *uint32) *obj.Prog
} }
func ginsnop(pp *objw.Progs) *obj.Prog { func ginsnop(pp *objw.Progs) *obj.Prog {
// Generate the preferred hardware nop: ori 0,0,0
p := pp.Prog(ppc64.AOR) p := pp.Prog(ppc64.AOR)
p.From.Type = obj.TYPE_REG p.From = obj.Addr{Type: obj.TYPE_CONST, Offset: 0}
p.From.Reg = ppc64.REG_R0 p.To = obj.Addr{Type: obj.TYPE_REG, Reg: ppc64.REG_R0}
p.To.Type = obj.TYPE_REG
p.To.Reg = ppc64.REG_R0
return p return p
} }

View file

@ -1424,9 +1424,7 @@ func WriteBasicTypes() {
} }
writeType(types.NewPtr(types.Types[types.TSTRING])) writeType(types.NewPtr(types.Types[types.TSTRING]))
writeType(types.NewPtr(types.Types[types.TUNSAFEPTR])) writeType(types.NewPtr(types.Types[types.TUNSAFEPTR]))
if base.Flag.G > 0 {
writeType(types.AnyType) writeType(types.AnyType)
}
// emit type structs for error and func(error) string. // emit type structs for error and func(error) string.
// The latter is the type of an auto-generated wrapper. // The latter is the type of an auto-generated wrapper.
@ -1457,7 +1455,7 @@ func WriteBasicTypes() {
type typeAndStr struct { type typeAndStr struct {
t *types.Type t *types.Type
short string // "short" here means NameString short string // "short" here means TypeSymName
regular string regular string
} }
@ -1853,8 +1851,8 @@ func methodWrapper(rcvr *types.Type, method *types.Field, forItab bool) *obj.LSy
} }
newnam.SetSiggen(true) newnam.SetSiggen(true)
// Except in quirks mode, unified IR creates its own wrappers. // Unified IR creates its own wrappers.
if base.Debug.Unified != 0 && base.Debug.UnifiedQuirks == 0 { if base.Debug.Unified != 0 {
return lsym return lsym
} }

View file

@ -78,7 +78,7 @@ func TestDebugLinesPushback(t *testing.T) {
// Unified mangles differently // Unified mangles differently
fn = "(*List[int]).PushBack" fn = "(*List[int]).PushBack"
} }
testDebugLines(t, "-N -l -G=3", "pushback.go", fn, []int{17, 18, 19, 20, 21, 22, 24}, true) testDebugLines(t, "-N -l", "pushback.go", fn, []int{17, 18, 19, 20, 21, 22, 24}, true)
} }
} }
@ -97,7 +97,7 @@ func TestDebugLinesConvert(t *testing.T) {
// Unified mangles differently // Unified mangles differently
fn = "G[int]" fn = "G[int]"
} }
testDebugLines(t, "-N -l -G=3", "convertline.go", fn, []int{9, 10, 11}, true) testDebugLines(t, "-N -l", "convertline.go", fn, []int{9, 10, 11}, true)
} }
} }

View file

@ -256,7 +256,7 @@
(Leq64F ...) => (FLED ...) (Leq64F ...) => (FLED ...)
(Leq32F ...) => (FLES ...) (Leq32F ...) => (FLES ...)
(EqPtr x y) => (SEQZ (SUB <x.Type> x y)) (EqPtr x y) => (SEQZ (SUB <typ.Uintptr> x y))
(Eq64 x y) => (SEQZ (SUB <x.Type> x y)) (Eq64 x y) => (SEQZ (SUB <x.Type> x y))
(Eq32 x y) => (SEQZ (SUB <x.Type> (ZeroExt32to64 x) (ZeroExt32to64 y))) (Eq32 x y) => (SEQZ (SUB <x.Type> (ZeroExt32to64 x) (ZeroExt32to64 y)))
(Eq16 x y) => (SEQZ (SUB <x.Type> (ZeroExt16to64 x) (ZeroExt16to64 y))) (Eq16 x y) => (SEQZ (SUB <x.Type> (ZeroExt16to64 x) (ZeroExt16to64 y)))
@ -264,7 +264,7 @@
(Eq64F ...) => (FEQD ...) (Eq64F ...) => (FEQD ...)
(Eq32F ...) => (FEQS ...) (Eq32F ...) => (FEQS ...)
(NeqPtr x y) => (SNEZ (SUB <x.Type> x y)) (NeqPtr x y) => (SNEZ (SUB <typ.Uintptr> x y))
(Neq64 x y) => (SNEZ (SUB <x.Type> x y)) (Neq64 x y) => (SNEZ (SUB <x.Type> x y))
(Neq32 x y) => (SNEZ (SUB <x.Type> (ZeroExt32to64 x) (ZeroExt32to64 y))) (Neq32 x y) => (SNEZ (SUB <x.Type> (ZeroExt32to64 x) (ZeroExt32to64 y)))
(Neq16 x y) => (SNEZ (SUB <x.Type> (ZeroExt16to64 x) (ZeroExt16to64 y))) (Neq16 x y) => (SNEZ (SUB <x.Type> (ZeroExt16to64 x) (ZeroExt16to64 y)))
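These rules lower equality to `SEQZ (SUB x y)`, relying on the identity that under wrapping arithmetic `x == y` exactly when `x - y == 0`; the change only pins the SUB's result type to `typ.Uintptr` for pointer operands. A sketch of the underlying identity in plain Go (the function name is illustrative):

```go
package main

import "fmt"

// eqViaSub mirrors the (Eq64 x y) => (SEQZ (SUB x y)) lowering:
// Go's uint64 subtraction wraps, so x-y == 0 iff x == y, including
// cases where the subtraction wraps around zero.
func eqViaSub(x, y uint64) bool {
	return x-y == 0
}

func main() {
	fmt.Println(eqViaSub(7, 7))            // true
	fmt.Println(eqViaSub(0, ^uint64(0)))   // false: 0 - MaxUint64 wraps to 1
}
```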

View file

@ -906,7 +906,7 @@ func (po *poset) Ordered(n1, n2 *Value) bool {
return i1 != i2 && po.reaches(i1, i2, true) return i1 != i2 && po.reaches(i1, i2, true)
} }
// Ordered reports whether n1<=n2. It returns false either when it is // OrderedOrEqual reports whether n1<=n2. It returns false either when it is
// certain that n1<=n2 is false, or if there is not enough information // certain that n1<=n2 is false, or if there is not enough information
// to tell. // to tell.
// Complexity is O(n). // Complexity is O(n).

View file

@ -1124,13 +1124,14 @@ func rewriteValueRISCV64_OpEqPtr(v *Value) bool {
v_1 := v.Args[1] v_1 := v.Args[1]
v_0 := v.Args[0] v_0 := v.Args[0]
b := v.Block b := v.Block
typ := &b.Func.Config.Types
// match: (EqPtr x y) // match: (EqPtr x y)
// result: (SEQZ (SUB <x.Type> x y)) // result: (SEQZ (SUB <typ.Uintptr> x y))
for { for {
x := v_0 x := v_0
y := v_1 y := v_1
v.reset(OpRISCV64SEQZ) v.reset(OpRISCV64SEQZ)
v0 := b.NewValue0(v.Pos, OpRISCV64SUB, x.Type) v0 := b.NewValue0(v.Pos, OpRISCV64SUB, typ.Uintptr)
v0.AddArg2(x, y) v0.AddArg2(x, y)
v.AddArg(v0) v.AddArg(v0)
return true return true
@ -2673,13 +2674,14 @@ func rewriteValueRISCV64_OpNeqPtr(v *Value) bool {
v_1 := v.Args[1] v_1 := v.Args[1]
v_0 := v.Args[0] v_0 := v.Args[0]
b := v.Block b := v.Block
typ := &b.Func.Config.Types
// match: (NeqPtr x y) // match: (NeqPtr x y)
// result: (SNEZ (SUB <x.Type> x y)) // result: (SNEZ (SUB <typ.Uintptr> x y))
for { for {
x := v_0 x := v_0
y := v_1 y := v_1
v.reset(OpRISCV64SNEZ) v.reset(OpRISCV64SNEZ)
v0 := b.NewValue0(v.Pos, OpRISCV64SUB, x.Type) v0 := b.NewValue0(v.Pos, OpRISCV64SUB, typ.Uintptr)
v0.AddArg2(x, y) v0.AddArg2(x, y)
v.AddArg(v0) v.AddArg(v0)
return true return true

View file

@ -2382,7 +2382,7 @@ func (s *state) ssaShiftOp(op ir.Op, t *types.Type, u *types.Type) ssa.Op {
func (s *state) conv(n ir.Node, v *ssa.Value, ft, tt *types.Type) *ssa.Value { func (s *state) conv(n ir.Node, v *ssa.Value, ft, tt *types.Type) *ssa.Value {
if ft.IsBoolean() && tt.IsKind(types.TUINT8) { if ft.IsBoolean() && tt.IsKind(types.TUINT8) {
// Bool -> uint8 is generated internally when indexing into runtime.staticbyte. // Bool -> uint8 is generated internally when indexing into runtime.staticbyte.
return s.newValue1(ssa.OpCopy, tt, v) return s.newValue1(ssa.OpCvtBoolToUint8, tt, v)
} }
if ft.IsInteger() && tt.IsInteger() { if ft.IsInteger() && tt.IsInteger() {
var op ssa.Op var op ssa.Op
@ -6768,6 +6768,34 @@ func EmitArgInfo(f *ir.Func, abiInfo *abi.ABIParamResultInfo) *obj.LSym {
return x return x
} }
// for wrapper, emit info of wrapped function.
func emitWrappedFuncInfo(e *ssafn, pp *objw.Progs) {
if base.Ctxt.Flag_linkshared {
// Relative reference (SymPtrOff) to another shared object doesn't work.
// Unfortunate.
return
}
wfn := e.curfn.WrappedFunc
if wfn == nil {
return
}
wsym := wfn.Linksym()
x := base.Ctxt.LookupInit(fmt.Sprintf("%s.wrapinfo", wsym.Name), func(x *obj.LSym) {
objw.SymPtrOff(x, 0, wsym)
x.Set(obj.AttrContentAddressable, true)
})
e.curfn.LSym.Func().WrapInfo = x
// Emit a funcdata pointing at the wrap info data.
p := pp.Prog(obj.AFUNCDATA)
p.From.SetConst(objabi.FUNCDATA_WrapInfo)
p.To.Type = obj.TYPE_MEM
p.To.Name = obj.NAME_EXTERN
p.To.Sym = x
}
// genssa appends entries to pp for each instruction in f. // genssa appends entries to pp for each instruction in f.
func genssa(f *ssa.Func, pp *objw.Progs) { func genssa(f *ssa.Func, pp *objw.Progs) {
var s State var s State
@ -6790,6 +6818,8 @@ func genssa(f *ssa.Func, pp *objw.Progs) {
p.To.Sym = openDeferInfo p.To.Sym = openDeferInfo
} }
emitWrappedFuncInfo(e, pp)
// Remember where each block starts. // Remember where each block starts.
s.bstart = make([]*obj.Prog, f.NumBlocks()) s.bstart = make([]*obj.Prog, f.NumBlocks())
s.pp = pp s.pp = pp

View file

@ -521,7 +521,6 @@ func AnySideEffects(n ir.Node) bool {
case ir.ONAME, case ir.ONAME,
ir.ONONAME, ir.ONONAME,
ir.OTYPE, ir.OTYPE,
ir.OPACK,
ir.OLITERAL, ir.OLITERAL,
ir.ONIL, ir.ONIL,
ir.OADD, ir.OADD,

View file

@ -6,7 +6,7 @@ package main
import ( import (
"fmt" "fmt"
"./mysort" "cmd/compile/internal/test/testdata/mysort"
) )
type MyString struct { type MyString struct {

View file

@ -70,14 +70,6 @@ func Declare(n *ir.Name, ctxt ir.Class) {
n.SetFrameOffset(0) n.SetFrameOffset(0)
} }
if s.Block == types.Block {
// functype will print errors about duplicate function arguments.
// Don't repeat the error here.
if ctxt != ir.PPARAM && ctxt != ir.PPARAMOUT {
Redeclared(n.Pos(), s, "in this block")
}
}
s.Block = types.Block s.Block = types.Block
s.Lastlineno = base.Pos s.Lastlineno = base.Pos
s.Def = n s.Def = n
@ -103,38 +95,6 @@ func Export(n *ir.Name) {
Target.Exports = append(Target.Exports, n) Target.Exports = append(Target.Exports, n)
} }
// Redeclared emits a diagnostic about symbol s being redeclared at pos.
func Redeclared(pos src.XPos, s *types.Sym, where string) {
if !s.Lastlineno.IsKnown() {
var pkgName *ir.PkgName
if s.Def == nil {
for id, pkg := range DotImportRefs {
if id.Sym().Name == s.Name {
pkgName = pkg
break
}
}
} else {
pkgName = DotImportRefs[s.Def.(*ir.Ident)]
}
base.ErrorfAt(pos, "%v redeclared %s\n"+
"\t%v: previous declaration during import %q", s, where, base.FmtPos(pkgName.Pos()), pkgName.Pkg.Path)
} else {
prevPos := s.Lastlineno
// When an import and a declaration collide in separate files,
// present the import as the "redeclared", because the declaration
// is visible where the import is, but not vice versa.
// See issue 4510.
if s.Def == nil {
pos, prevPos = prevPos, pos
}
base.ErrorfAt(pos, "%v redeclared %s\n"+
"\t%v: previous declaration", s, where, base.FmtPos(prevPos))
}
}
// declare the function proper // declare the function proper
// and declare the arguments. // and declare the arguments.
// called in extern-declaration context // called in extern-declaration context
@ -171,90 +131,6 @@ func CheckFuncStack() {
} }
} }
// Add a method, declared as a function.
// - msym is the method symbol
// - t is function type (with receiver)
// Returns a pointer to the existing or added Field; or nil if there's an error.
func addmethod(n *ir.Func, msym *types.Sym, t *types.Type, local, nointerface bool) *types.Field {
if msym == nil {
base.Fatalf("no method symbol")
}
// get parent type sym
rf := t.Recv() // ptr to this structure
if rf == nil {
base.Errorf("missing receiver")
return nil
}
mt := types.ReceiverBaseType(rf.Type)
if mt == nil || mt.Sym() == nil {
pa := rf.Type
t := pa
if t != nil && t.IsPtr() {
if t.Sym() != nil {
base.Errorf("invalid receiver type %v (%v is a pointer type)", pa, t)
return nil
}
t = t.Elem()
}
switch {
case t == nil || t.Broke():
// rely on typecheck having complained before
case t.Sym() == nil:
base.Errorf("invalid receiver type %v (%v is not a defined type)", pa, t)
case t.IsPtr():
base.Errorf("invalid receiver type %v (%v is a pointer type)", pa, t)
case t.IsInterface():
base.Errorf("invalid receiver type %v (%v is an interface type)", pa, t)
default:
// Should have picked off all the reasons above,
// but just in case, fall back to generic error.
base.Errorf("invalid receiver type %v (%L / %L)", pa, pa, t)
}
return nil
}
if local && mt.Sym().Pkg != types.LocalPkg {
base.Errorf("cannot define new methods on non-local type %v", mt)
return nil
}
if msym.IsBlank() {
return nil
}
if mt.IsStruct() {
for _, f := range mt.Fields().Slice() {
if f.Sym == msym {
base.Errorf("type %v has both field and method named %v", mt, msym)
f.SetBroke(true)
return nil
}
}
}
for _, f := range mt.Methods().Slice() {
if msym.Name != f.Sym.Name {
continue
}
// types.Identical only checks that incoming and result parameters match,
// so explicitly check that the receiver parameters match too.
if !types.Identical(t, f.Type) || !types.Identical(t.Recv().Type, f.Type.Recv().Type) {
base.Errorf("method redeclared: %v.%v\n\t%v\n\t%v", mt, msym, f.Type, t)
}
return f
}
f := types.NewField(base.Pos, msym, t)
f.Nname = n.Nname
f.SetNointerface(nointerface)
mt.Methods().Append(f)
return f
}
func autoexport(n *ir.Name, ctxt ir.Class) {
if n.Sym().Pkg != types.LocalPkg {
return
@ -455,13 +331,6 @@ func autotmpname(n int) string {
// Add a preceding . to avoid clashing with legal names.
prefix := ".autotmp_%d"
// In quirks mode, pad out the number to stabilize variable
// sorting. This ensures autotmps 8 and 9 sort the same way even
// if they get renumbered to 9 and 10, respectively.
if base.Debug.UnifiedQuirks != 0 {
prefix = ".autotmp_%06d"
}
s = fmt.Sprintf(prefix, n)
autotmpnames[n] = s
}


@ -220,21 +220,6 @@ func tcCompLit(n *ir.CompLitExpr) (res ir.Node) {
ir.SetPos(n.Ntype)
// Need to handle [...]T arrays specially.
if array, ok := n.Ntype.(*ir.ArrayType); ok && array.Elem != nil && array.Len == nil {
array.Elem = typecheckNtype(array.Elem)
elemType := array.Elem.Type()
if elemType == nil {
n.SetType(nil)
return n
}
length := typecheckarraylit(elemType, -1, n.List, "array literal")
n.SetOp(ir.OARRAYLIT)
n.SetType(types.NewArray(elemType, length))
n.Ntype = nil
return n
}
n.Ntype = typecheckNtype(n.Ntype)
t := n.Ntype.Type()
if t == nil {
@ -375,13 +360,7 @@ func tcCompLit(n *ir.CompLitExpr) (res ir.Node) {
func tcStructLitKey(typ *types.Type, kv *ir.KeyExpr) *ir.StructKeyExpr {
key := kv.Key
// Sym might have resolved to name in other top-level
// package, because of import dot. Redirect to correct sym
// before we do the lookup.
sym := key.Sym()
if id, ok := key.(*ir.Ident); ok && DotImportRefs[id] != nil {
sym = Lookup(sym.Name)
}
// An OXDOT uses the Sym field to hold
// the field to the right of the dot,


@ -302,20 +302,6 @@ func tcFunc(n *ir.Func) {
}
n.Nname = AssignExpr(n.Nname).(*ir.Name)
t := n.Nname.Type()
if t == nil {
return
}
rcvr := t.Recv()
if rcvr != nil && n.Shortname != nil {
m := addmethod(n, n.Shortname, t, true, n.Pragma&ir.Nointerface != 0)
if m == nil {
return
}
n.Nname.SetSym(ir.MethodSym(rcvr.Type, n.Shortname))
Declare(n.Nname, ir.PFUNC)
}
}
// tcCall typechecks an OCALL node.


@ -607,7 +607,7 @@ func (p *iexporter) doDecl(n *ir.Name) {
// Do same for ComparableType as for ErrorType.
underlying = types.ComparableType
}
if base.Flag.G > 0 && underlying == types.AnyType.Underlying() {
if underlying == types.AnyType.Underlying() {
// Do same for AnyType as for ErrorType.
underlying = types.AnyType
}
@ -621,12 +621,7 @@ func (p *iexporter) doDecl(n *ir.Name) {
break
}
// Sort methods, for consistency with types2.
methods := append([]*types.Field(nil), t.Methods().Slice()...)
if base.Debug.UnifiedQuirks != 0 {
sort.Sort(types.MethodsByName(methods))
}
methods := t.Methods().Slice()
w.uint64(uint64(len(methods)))
for _, m := range methods {
w.pos(m.Pos)
@ -954,7 +949,6 @@ func (w *exportWriter) startType(k itag) {
func (w *exportWriter) doTyp(t *types.Type) {
s := t.Sym()
if s != nil && t.OrigSym() != nil {
assert(base.Flag.G > 0)
// This is an instantiated type - could be a re-instantiation like
// Value[T2] or a full instantiation like Value[int].
if strings.Index(s.Name, "[") < 0 {
@ -979,7 +973,6 @@ func (w *exportWriter) doTyp(t *types.Type) {
// type, rather than a defined type with typeparam underlying type, like:
// type orderedAbs[T any] T
if t.IsTypeParam() && t.Underlying() == t {
assert(base.Flag.G > 0)
if s.Pkg == types.BuiltinPkg || s.Pkg == types.UnsafePkg {
base.Fatalf("builtin type missing from typIndex: %v", t)
}
@ -1052,14 +1045,6 @@ func (w *exportWriter) doTyp(t *types.Type) {
}
}
// Sort methods and embedded types, for consistency with types2.
// Note: embedded types may be anonymous, and types2 sorts them
// with sort.Stable too.
if base.Debug.UnifiedQuirks != 0 {
sort.Sort(types.MethodsByName(methods))
sort.Stable(types.EmbeddedsByName(embeddeds))
}
w.startType(interfaceType)
w.setPkg(t.Pkg(), true)
@ -1077,7 +1062,6 @@ func (w *exportWriter) doTyp(t *types.Type) {
}
case types.TUNION:
assert(base.Flag.G > 0)
// TODO(danscales): possibly put out the tilde bools in more
// compact form.
w.startType(unionType)


@ -354,15 +354,18 @@ func (r *importReader) doDecl(sym *types.Sym) *ir.Name {
// declaration before recursing.
n := importtype(pos, sym)
t := n.Type()
// Because of recursion, we need to defer width calculations and
// instantiations on intermediate types until the top-level type is
// fully constructed. Note that we can have recursion via type
// constraints.
types.DeferCheckSize()
deferDoInst()
if tag == 'U' {
rparams := r.typeList()
t.SetRParams(rparams)
}
// We also need to defer width calculations until
// after the underlying type has been assigned.
types.DeferCheckSize()
deferDoInst()
underlying := r.typ()
t.SetUnderlying(underlying)


@ -22,10 +22,6 @@ func AssignConv(n ir.Node, t *types.Type, context string) ir.Node {
return assignconvfn(n, t, func() string { return context })
}
// DotImportRefs maps idents introduced by importDot back to the
// ir.PkgName they were dot-imported through.
var DotImportRefs map[*ir.Ident]*ir.PkgName
// LookupNum looks up the symbol starting with prefix and ending with
// the decimal n. If prefix is too long, LookupNum panics.
func LookupNum(prefix string, n int) *types.Sym {
@ -1424,6 +1420,68 @@ func genericTypeName(sym *types.Sym) string {
return sym.Name[0:strings.Index(sym.Name, "[")]
}
// getShapes appends the list of the shape types that are used within type t to
// listp. The type traversal is simplified for two reasons: (1) we can always stop a
// type traversal when t.HasShape() is false; and (2) shape types can't appear inside
// a named type, except for the type args of a generic type. So, the traversal will
// always stop before we have to deal with recursive types.
func getShapes(t *types.Type, listp *[]*types.Type) {
if !t.HasShape() {
return
}
if t.IsShape() {
*listp = append(*listp, t)
return
}
if t.Sym() != nil {
// A named type can't have shapes in it, except for type args of a
// generic type. We will have to deal with this differently once we
// alloc local types in generic functions (#47631).
for _, rparam := range t.RParams() {
getShapes(rparam, listp)
}
return
}
switch t.Kind() {
case types.TARRAY, types.TPTR, types.TSLICE, types.TCHAN:
getShapes(t.Elem(), listp)
case types.TSTRUCT:
for _, f := range t.FieldSlice() {
getShapes(f.Type, listp)
}
case types.TFUNC:
for _, f := range t.Recvs().FieldSlice() {
getShapes(f.Type, listp)
}
for _, f := range t.Params().FieldSlice() {
getShapes(f.Type, listp)
}
for _, f := range t.Results().FieldSlice() {
getShapes(f.Type, listp)
}
for _, f := range t.TParams().FieldSlice() {
getShapes(f.Type, listp)
}
case types.TINTER:
for _, f := range t.Methods().Slice() {
getShapes(f.Type, listp)
}
case types.TMAP:
getShapes(t.Key(), listp)
getShapes(t.Elem(), listp)
default:
panic(fmt.Sprintf("Bad type in getShapes: %v", t.Kind()))
}
}
// Shapify takes a concrete type and a type param index, and returns a GCshape type that can
// be used in place of the input type and still generate identical code.
// No methods are added - all method calls directly on a shape should
@ -1432,9 +1490,9 @@ func genericTypeName(sym *types.Sym) string {
// For now, we only consider two types to have the same shape, if they have exactly
// the same underlying type or they are both pointer types.
//
// tparam is the associated typeparam. If there is a structural type for
// the associated type param (not common), then a pointer type t is mapped to its
// underlying type, rather than being merged with other pointers.
// tparam is the associated typeparam - it must be TTYPEPARAM type. If there is a
// structural type for the associated type param (not common), then a pointer type t
// is mapped to its underlying type, rather than being merged with other pointers.
//
// Shape types are also distinguished by the index of the type in a type param/arg
// list. We need to do this so we can distinguish and substitute properly for two
@ -1442,6 +1500,30 @@ func genericTypeName(sym *types.Sym) string {
// instantiation.
func Shapify(t *types.Type, index int, tparam *types.Type) *types.Type {
assert(!t.IsShape())
if t.HasShape() {
// We are sometimes dealing with types from a shape instantiation
// that were constructed from existing shape types, so t may
// sometimes have shape types inside it. In that case, we find all
// those shape types with getShapes() and replace them with their
// underlying type.
//
// If we don't do this, we may create extra unneeded shape types that
// have these other shape types embedded in them. This may lead to
// generating extra shape instantiations, and a mismatch between the
// instantiations that we used in generating dictionaries and the
// instantiations that are actually called. (#51303).
list := []*types.Type{}
getShapes(t, &list)
list2 := make([]*types.Type, len(list))
for i, shape := range list {
list2[i] = shape.Underlying()
}
ts := Tsubster{
Tparams: list,
Targs: list2,
}
t = ts.Typ(t)
}
// Map all types with the same underlying type to the same shape.
u := t.Underlying()


@ -5,72 +5,11 @@
package typecheck
import (
"go/constant"
"cmd/compile/internal/base"
"cmd/compile/internal/ir"
"cmd/compile/internal/types"
)
// tcArrayType typechecks an OTARRAY node.
func tcArrayType(n *ir.ArrayType) ir.Node {
n.Elem = typecheckNtype(n.Elem)
if n.Elem.Type() == nil {
return n
}
if n.Len == nil { // [...]T
if !n.Diag() {
n.SetDiag(true)
base.Errorf("use of [...] array outside of array literal")
}
return n
}
n.Len = indexlit(Expr(n.Len))
size := n.Len
if ir.ConstType(size) != constant.Int {
switch {
case size.Type() == nil:
// Error already reported elsewhere.
case size.Type().IsInteger() && size.Op() != ir.OLITERAL:
base.Errorf("non-constant array bound %v", size)
default:
base.Errorf("invalid array bound %v", size)
}
return n
}
v := size.Val()
if ir.ConstOverflow(v, types.Types[types.TINT]) {
base.Errorf("array bound is too large")
return n
}
if constant.Sign(v) < 0 {
base.Errorf("array bound must be non-negative")
return n
}
bound, _ := constant.Int64Val(v)
t := types.NewArray(n.Elem.Type(), bound)
n.SetOTYPE(t)
types.CheckSize(t)
return n
}
// tcChanType typechecks an OTCHAN node.
func tcChanType(n *ir.ChanType) ir.Node {
n.Elem = typecheckNtype(n.Elem)
l := n.Elem
if l.Type() == nil {
return n
}
if l.Type().NotInHeap() {
base.Errorf("chan of incomplete (or unallocatable) type not allowed")
}
n.SetOTYPE(types.NewChan(l.Type(), n.Dir))
return n
}
// tcFuncType typechecks an OTFUNC node.
func tcFuncType(n *ir.FuncType) ir.Node {
misc := func(f *types.Field, nf *ir.Field) {
@ -97,71 +36,6 @@ func tcFuncType(n *ir.FuncType) ir.Node {
return n
}
// tcInterfaceType typechecks an OTINTER node.
func tcInterfaceType(n *ir.InterfaceType) ir.Node {
if len(n.Methods) == 0 {
n.SetOTYPE(types.Types[types.TINTER])
return n
}
lno := base.Pos
methods := tcFields(n.Methods, nil)
base.Pos = lno
n.SetOTYPE(types.NewInterface(types.LocalPkg, methods, false))
return n
}
// tcMapType typechecks an OTMAP node.
func tcMapType(n *ir.MapType) ir.Node {
n.Key = typecheckNtype(n.Key)
n.Elem = typecheckNtype(n.Elem)
l := n.Key
r := n.Elem
if l.Type() == nil || r.Type() == nil {
return n
}
if l.Type().NotInHeap() {
base.Errorf("incomplete (or unallocatable) map key not allowed")
}
if r.Type().NotInHeap() {
base.Errorf("incomplete (or unallocatable) map value not allowed")
}
n.SetOTYPE(types.NewMap(l.Type(), r.Type()))
mapqueue = append(mapqueue, n) // check map keys when all types are settled
return n
}
// tcSliceType typechecks an OTSLICE node.
func tcSliceType(n *ir.SliceType) ir.Node {
n.Elem = typecheckNtype(n.Elem)
if n.Elem.Type() == nil {
return n
}
t := types.NewSlice(n.Elem.Type())
n.SetOTYPE(t)
types.CheckSize(t)
return n
}
// tcStructType typechecks an OTSTRUCT node.
func tcStructType(n *ir.StructType) ir.Node {
lno := base.Pos
fields := tcFields(n.Fields, func(f *types.Field, nf *ir.Field) {
if nf.Embedded {
checkembeddedtype(f.Type)
f.Embedded = 1
}
f.Note = nf.Note
})
checkdupfields("field", fields)
base.Pos = lno
n.SetOTYPE(types.NewStruct(types.LocalPkg, fields))
return n
}
// tcField typechecks a generic Field.
// misc can be provided to handle specialized typechecking.
func tcField(n *ir.Field, misc func(*types.Field, *ir.Field)) *types.Field {


@ -145,13 +145,6 @@ func Resolve(n ir.Node) (res ir.Node) {
}
if sym := n.Sym(); sym.Pkg != types.LocalPkg {
// We might have an ir.Ident from oldname or importDot.
if id, ok := n.(*ir.Ident); ok {
if pkgName := DotImportRefs[id]; pkgName != nil {
pkgName.Used = true
}
}
return expandDecl(n)
}
@ -297,7 +290,7 @@ func typecheck(n ir.Node, top int) (res ir.Node) {
// But re-typecheck ONAME/OTYPE/OLITERAL/OPACK node in case context has changed.
if n.Typecheck() == 1 || n.Typecheck() == 3 {
switch n.Op() {
case ir.ONAME, ir.OTYPE, ir.OLITERAL, ir.OPACK:
case ir.ONAME, ir.OTYPE, ir.OLITERAL:
break
default:
@ -529,43 +522,14 @@ func typecheck1(n ir.Node, top int) ir.Node {
// type already set
return n
case ir.OPACK:
n := n.(*ir.PkgName)
base.Errorf("use of package %v without selector", n.Sym())
n.SetDiag(true)
return n
// types (ODEREF is with exprs)
case ir.OTYPE:
return n
case ir.OTSLICE:
n := n.(*ir.SliceType)
return tcSliceType(n)
case ir.OTARRAY:
n := n.(*ir.ArrayType)
return tcArrayType(n)
case ir.OTMAP:
n := n.(*ir.MapType)
return tcMapType(n)
case ir.OTCHAN:
n := n.(*ir.ChanType)
return tcChanType(n)
case ir.OTSTRUCT:
n := n.(*ir.StructType)
return tcStructType(n)
case ir.OTINTER:
n := n.(*ir.InterfaceType)
return tcInterfaceType(n)
case ir.OTFUNC:
n := n.(*ir.FuncType)
return tcFuncType(n)
// type or expr
case ir.ODEREF:
n := n.(*ir.StarExpr)
@ -1729,18 +1693,6 @@ func stringtoruneslit(n *ir.ConvExpr) ir.Node {
return Expr(nn)
}
var mapqueue []*ir.MapType
func CheckMapKeys() {
for _, n := range mapqueue {
k := n.Type().MapType().Key
if !k.Broke() && !types.IsComparable(k) {
base.ErrorfAt(n.Pos(), "invalid map key type %v", k)
}
}
mapqueue = nil
}
func typecheckdeftype(n *ir.Name) {
if base.EnableTrace && base.Flag.LowerT {
defer tracePrint("typecheckdeftype", n)(nil)


@ -72,6 +72,7 @@ const (
fmtDebug
fmtTypeID
fmtTypeIDName
fmtTypeIDHash
)
// Sym
@ -144,10 +145,21 @@ func symfmt(b *bytes.Buffer, s *Sym, verb rune, mode fmtMode) {
if q := pkgqual(s.Pkg, verb, mode); q != "" {
b.WriteString(q)
b.WriteByte('.')
if mode == fmtTypeIDName {
switch mode {
case fmtTypeIDName:
// If name is a generic instantiation, it might have local package placeholders
// in it. Replace those placeholders with the package name. See issue 49547.
name = strings.Replace(name, LocalPkg.Prefix, q, -1)
case fmtTypeIDHash:
// If name is a generic instantiation, don't hash the instantiating types.
// This isn't great, but it is safe. If we hash the instantiating types, then
// we need to make sure they have just the package name. At this point, they
// either have "", or the whole package path, and it is hard to reconcile
// the two without depending on -p (which we might do someday).
// See issue 51250.
if i := strings.Index(name, "["); i >= 0 {
name = name[:i]
}
}
}
b.WriteString(name)
@ -157,6 +169,9 @@ func symfmt(b *bytes.Buffer, s *Sym, verb rune, mode fmtMode) {
// symbols from the given package in the given mode.
// If it returns the empty string, no qualification is needed.
func pkgqual(pkg *Pkg, verb rune, mode fmtMode) string {
if pkg == nil {
return ""
}
if verb != 'S' {
switch mode {
case fmtGo: // This is for the user
@ -173,7 +188,7 @@ func pkgqual(pkg *Pkg, verb rune, mode fmtMode) string {
case fmtDebug:
return pkg.Name
case fmtTypeIDName:
case fmtTypeIDName, fmtTypeIDHash:
// dcommontype, typehash
return pkg.Name
@ -331,7 +346,7 @@ func tconv2(b *bytes.Buffer, t *Type, verb rune, mode fmtMode, visited map[*Type
if t == AnyType || t == ByteType || t == RuneType {
// in %-T mode collapse predeclared aliases with their originals.
switch mode {
case fmtTypeIDName, fmtTypeID:
case fmtTypeIDName, fmtTypeIDHash, fmtTypeID:
t = Types[t.Kind()]
default:
sconv2(b, t.Sym(), 'S', mode)
@ -422,7 +437,7 @@ func tconv2(b *bytes.Buffer, t *Type, verb rune, mode fmtMode, visited map[*Type
case TPTR:
b.WriteByte('*')
switch mode {
case fmtTypeID, fmtTypeIDName:
case fmtTypeID, fmtTypeIDName, fmtTypeIDHash:
if verb == 'S' {
tconv2(b, t.Elem(), 'S', mode, visited)
return
@ -484,7 +499,7 @@ func tconv2(b *bytes.Buffer, t *Type, verb rune, mode fmtMode, visited map[*Type
case IsExported(f.Sym.Name):
sconv2(b, f.Sym, 'S', mode)
default:
if mode != fmtTypeIDName {
if mode != fmtTypeIDName && mode != fmtTypeIDHash {
mode = fmtTypeID
}
sconv2(b, f.Sym, 'v', mode)
@ -554,7 +569,7 @@ func tconv2(b *bytes.Buffer, t *Type, verb rune, mode fmtMode, visited map[*Type
b.WriteByte(byte(open))
fieldVerb := 'v'
switch mode {
case fmtTypeID, fmtTypeIDName, fmtGo:
case fmtTypeID, fmtTypeIDName, fmtTypeIDHash, fmtGo:
// no argument names on function signature, and no "noescape"/"nosplit" tags
fieldVerb = 'S'
}
@ -657,7 +672,7 @@ func fldconv(b *bytes.Buffer, f *Field, verb rune, mode fmtMode, visited map[*Ty
// Compute tsym, the symbol that would normally be used as
// the field name when embedding f.Type.
// TODO(mdempsky): Check for other occurrences of this logic
// and deduplicate.
typ := f.Type
if typ.IsPtr() {
@ -688,7 +703,7 @@ func fldconv(b *bytes.Buffer, f *Field, verb rune, mode fmtMode, visited map[*Ty
if name == ".F" {
name = "F" // Hack for toolstash -cmp.
}
if !IsExported(name) && mode != fmtTypeIDName {
if !IsExported(name) && mode != fmtTypeIDName && mode != fmtTypeIDHash {
name = sconv(s, 0, mode) // qualify non-exported names (used on structs, not on funarg)
} else {
@ -756,7 +771,7 @@ func FmtConst(v constant.Value, sharp bool) string {
// TypeHash computes a hash value for type t to use in type switch statements.
func TypeHash(t *Type) uint32 {
p := t.NameString()
p := tconv(t, 0, fmtTypeIDHash)
// Using MD5 is overkill, but reduces accidental collisions.
h := md5.Sum([]byte(p))


@ -115,10 +115,6 @@ func InitTypes(defTypeName func(sym *Sym, typ *Type) Object) {
AnyType.SetUnderlying(NewInterface(BuiltinPkg, []*Field{}, false))
ResumeCheckSize()
if base.Flag.G == 0 {
ComparableType.Sym().Def = nil
}
Types[TUNSAFEPTR] = defBasic(TUNSAFEPTR, UnsafePkg, "Pointer")
Types[TBLANK] = newType(TBLANK)


@ -421,9 +421,15 @@ func (conf *Config) Check(path string, files []*syntax.File, info *Info) (*Packa
}
// AssertableTo reports whether a value of type V can be asserted to have type T.
// The behavior of AssertableTo is undefined if V is a generalized interface; i.e.,
// an interface that may only be used as a type constraint in Go code.
func AssertableTo(V *Interface, T Type) bool {
m, _ := (*Checker)(nil).assertableTo(V, T)
return m == nil
// Checker.newAssertableTo suppresses errors for invalid types, so we need special
// handling here.
if T.Underlying() == Typ[Invalid] {
return false
}
return (*Checker)(nil).newAssertableTo(V, T) == nil
}
// AssignableTo reports whether a value of type V is assignable to a variable of type T.


@ -474,52 +474,54 @@ func TestInstanceInfo(t *testing.T) {
// `func(float64)`,
// },
{`package s1; func f[T any, P interface{~*T}](x T) {}; func _(x string) { f(x) }`,
{`package s1; func f[T any, P interface{*T}](x T) {}; func _(x string) { f(x) }`,
`f`,
[]string{`string`, `*string`},
`func(x string)`,
},
{`package s2; func f[T any, P interface{~*T}](x []T) {}; func _(x []int) { f(x) }`,
{`package s2; func f[T any, P interface{*T}](x []T) {}; func _(x []int) { f(x) }`,
`f`,
[]string{`int`, `*int`},
`func(x []int)`,
},
{`package s3; type C[T any] interface{~chan<- T}; func f[T any, P C[T]](x []T) {}; func _(x []int) { f(x) }`,
{`package s3; type C[T any] interface{chan<- T}; func f[T any, P C[T]](x []T) {}; func _(x []int) { f(x) }`,
`f`,
[]string{`int`, `chan<- int`},
`func(x []int)`,
},
{`package s4; type C[T any] interface{~chan<- T}; func f[T any, P C[T], Q C[[]*P]](x []T) {}; func _(x []int) { f(x) }`,
{`package s4; type C[T any] interface{chan<- T}; func f[T any, P C[T], Q C[[]*P]](x []T) {}; func _(x []int) { f(x) }`,
`f`,
[]string{`int`, `chan<- int`, `chan<- []*chan<- int`},
`func(x []int)`,
},
{`package t1; func f[T any, P interface{~*T}]() T { panic(0) }; func _() { _ = f[string] }`,
{`package t1; func f[T any, P interface{*T}]() T { panic(0) }; func _() { _ = f[string] }`,
`f`,
[]string{`string`, `*string`},
`func() string`,
},
{`package t2; func f[T any, P interface{~*T}]() T { panic(0) }; func _() { _ = (f[string]) }`,
{`package t2; func f[T any, P interface{*T}]() T { panic(0) }; func _() { _ = (f[string]) }`,
`f`,
[]string{`string`, `*string`},
`func() string`,
},
{`package t3; type C[T any] interface{~chan<- T}; func f[T any, P C[T], Q C[[]*P]]() []T { return nil }; func _() { _ = f[int] }`,
{`package t3; type C[T any] interface{chan<- T}; func f[T any, P C[T], Q C[[]*P]]() []T { return nil }; func _() { _ = f[int] }`,
`f`,
[]string{`int`, `chan<- int`, `chan<- []*chan<- int`},
`func() []int`,
},
{`package t4; type C[T any] interface{~chan<- T}; func f[T any, P C[T], Q C[[]*P]]() []T { return nil }; func _() { _ = f[int] }`,
{`package t4; type C[T any] interface{chan<- T}; func f[T any, P C[T], Q C[[]*P]]() []T { return nil }; func _() { _ = (f[int]) }`,
`f`,
[]string{`int`, `chan<- int`, `chan<- []*chan<- int`},
`func() []int`,
},
{`package i0; import lib "generic_lib"; func _() { lib.F(42) }`,
{`package i0; import "lib"; func _() { lib.F(42) }`,
`F`,
[]string{`int`},
`func(int)`,
},
{`package type0; type T[P interface{~int}] struct{ x P }; var _ T[int]`,
`T`,
[]string{`int`},
@ -540,7 +542,7 @@ func TestInstanceInfo(t *testing.T) {
[]string{`[]int`, `int`},
`struct{x []int; y int}`,
},
{`package type4; import lib "generic_lib"; var _ lib.T[int]`,
{`package type4; import "lib"; var _ lib.T[int]`,
`T`,
[]string{`int`},
`[]int`,
@ -548,7 +550,7 @@ func TestInstanceInfo(t *testing.T) {
}
for _, test := range tests {
const lib = `package generic_lib
const lib = `package lib
func F[P any](P) {}
@ -1697,7 +1699,7 @@ func F(){
var F = /*F=func:12*/ F /*F=var:17*/ ; _ = F
var a []int
for i, x := range /*i=undef*/ /*x=var:16*/ a /*i=var:20*/ /*x=var:20*/ { _ = i; _ = x }
for i, x := range a /*i=undef*/ /*x=var:16*/ { _ = i; _ = x }
var i interface{}
switch y := i.(type) { /*y=undef*/
@ -2313,27 +2315,27 @@ type Bad Bad // invalid type
conf := Config{Error: func(error) {}}
pkg, _ := conf.Check(f.PkgName.Value, []*syntax.File{f}, nil)
scope := pkg.Scope()
lookup := func(tname string) Type { return pkg.Scope().Lookup(tname).Type() }
var (
EmptyIface = scope.Lookup("EmptyIface").Type().Underlying().(*Interface)
EmptyIface = lookup("EmptyIface").Underlying().(*Interface)
I = scope.Lookup("I").Type().(*Named)
I = lookup("I").(*Named)
II = I.Underlying().(*Interface)
C = scope.Lookup("C").Type().(*Named)
C = lookup("C").(*Named)
CI = C.Underlying().(*Interface)
Integer = scope.Lookup("Integer").Type().Underlying().(*Interface)
Integer = lookup("Integer").Underlying().(*Interface)
EmptyTypeSet = scope.Lookup("EmptyTypeSet").Type().Underlying().(*Interface)
EmptyTypeSet = lookup("EmptyTypeSet").Underlying().(*Interface)
N1 = scope.Lookup("N1").Type()
N1 = lookup("N1")
N1p = NewPointer(N1)
N2 = scope.Lookup("N2").Type()
N2 = lookup("N2")
N2p = NewPointer(N2)
N3 = scope.Lookup("N3").Type()
N3 = lookup("N3")
N4 = scope.Lookup("N4").Type()
N4 = lookup("N4")
Bad = scope.Lookup("Bad").Type()
Bad = lookup("Bad")
)
tests := []struct {
t Type
V Type
i *Interface
T *Interface
want bool
}{
{I, II, true},
@@ -2364,8 +2366,20 @@ type Bad Bad // invalid type
 	}
 
 	for _, test := range tests {
-		if got := Implements(test.t, test.i); got != test.want {
-			t.Errorf("Implements(%s, %s) = %t, want %t", test.t, test.i, got, test.want)
+		if got := Implements(test.V, test.T); got != test.want {
+			t.Errorf("Implements(%s, %s) = %t, want %t", test.V, test.T, got, test.want)
+		}
+
+		// The type assertion x.(T) is valid if T is an interface or if T implements the type of x.
+		// The assertion is never valid if T is a bad type.
+		V := test.T
+		T := test.V
+		want := false
+		if _, ok := T.Underlying().(*Interface); (ok || Implements(T, V)) && T != Bad {
+			want = true
+		}
+		if got := AssertableTo(V, T); got != want {
+			t.Errorf("AssertableTo(%s, %s) = %t, want %t", V, T, got, want)
 		}
 	}
 }
@@ -294,15 +294,14 @@ func (check *Checker) typesSummary(list []Type, variadic bool) string {
 	return "(" + strings.Join(res, ", ") + ")"
 }
 
-func (check *Checker) assignError(rhs []syntax.Expr, nvars, nvals int) {
-	measure := func(x int, unit string) string {
-		s := fmt.Sprintf("%d %s", x, unit)
-		if x != 1 {
-			s += "s"
-		}
-		return s
-	}
+func measure(x int, unit string) string {
+	if x != 1 {
+		unit += "s"
+	}
+	return fmt.Sprintf("%d %s", x, unit)
+}
 
+func (check *Checker) assignError(rhs []syntax.Expr, nvars, nvals int) {
 	vars := measure(nvars, "variable")
 	vals := measure(nvals, "value")
 	rhs0 := rhs[0]
@@ -82,10 +82,24 @@ func (check *Checker) builtin(x *operand, call *syntax.CallExpr, id builtinId) (
 		// of S and the respective parameter passing rules apply."
 		S := x.typ
 		var T Type
-		if s, _ := structuralType(S).(*Slice); s != nil {
+		if s, _ := coreType(S).(*Slice); s != nil {
 			T = s.elem
 		} else {
-			check.errorf(x, invalidArg+"%s is not a slice", x)
+			var cause string
+			switch {
+			case x.isNil():
+				cause = "have untyped nil"
+			case isTypeParam(S):
+				if u := coreType(S); u != nil {
+					cause = check.sprintf("%s has core type %s", x, u)
+				} else {
+					cause = check.sprintf("%s has no core type", x)
+				}
+			default:
+				cause = check.sprintf("have %s", x)
+			}
+			// don't use invalidArg prefix here as it would repeat "argument" in the error message
+			check.errorf(x, "first argument to append must be a slice; %s", cause)
 			return
 		}
@@ -101,7 +115,7 @@ func (check *Checker) builtin(x *operand, call *syntax.CallExpr, id builtinId) (
 			if x.mode == invalid {
 				return
 			}
-			if t := structuralString(x.typ); t != nil && isString(t) {
+			if t := coreString(x.typ); t != nil && isString(t) {
 				if check.Types != nil {
 					sig := makeSig(S, S, x.typ)
 					sig.variadic = true
@@ -331,14 +345,14 @@ func (check *Checker) builtin(x *operand, call *syntax.CallExpr, id builtinId) (
 
 	case _Copy:
 		// copy(x, y []T) int
-		dst, _ := structuralType(x.typ).(*Slice)
+		dst, _ := coreType(x.typ).(*Slice)
 
 		var y operand
 		arg(&y, 1)
 		if y.mode == invalid {
 			return
 		}
-		src0 := structuralString(y.typ)
+		src0 := coreString(y.typ)
 		if src0 != nil && isString(src0) {
 			src0 = NewSlice(universeByte)
 		}
@@ -472,13 +486,13 @@ func (check *Checker) builtin(x *operand, call *syntax.CallExpr, id builtinId) (
 		}
 
 		var min int // minimum number of arguments
-		switch structuralType(T).(type) {
+		switch coreType(T).(type) {
 		case *Slice:
 			min = 2
 		case *Map, *Chan:
 			min = 1
 		case nil:
-			check.errorf(arg0, invalidArg+"cannot make %s: no structural type", arg0)
+			check.errorf(arg0, invalidArg+"cannot make %s: no core type", arg0)
 			return
 		default:
 			check.errorf(arg0, invalidArg+"cannot make %s; type must be slice, map, or channel", arg0)
@@ -168,7 +168,7 @@ func (check *Checker) callExpr(x *operand, call *syntax.CallExpr) exprKind {
 	cgocall := x.mode == cgofunc
 
 	// a type parameter may be "called" if all types have the same signature
-	sig, _ := structuralType(x.typ).(*Signature)
+	sig, _ := coreType(x.typ).(*Signature)
 	if sig == nil {
 		check.errorf(x, invalidOp+"cannot call non-function %s", x)
 		x.mode = invalid
@@ -525,7 +525,11 @@ func (check *Checker) selector(x *operand, e *syntax.SelectorExpr) {
 	}
 
 	check.exprOrType(x, e.X, false)
-	if x.mode == invalid {
+	switch x.mode {
+	case builtin:
+		check.errorf(e.Pos(), "cannot select on %s", x)
+		goto Error
+	case invalid:
 		goto Error
 	}
@@ -18,19 +18,6 @@ var nopos syntax.Pos
 // debugging/development support
 const debug = false // leave on during development
 
-// If forceStrict is set, the type-checker enforces additional
-// rules not specified by the Go 1 spec, but which will
-// catch guaranteed run-time errors if the respective
-// code is executed. In other words, programs passing in
-// strict mode are Go 1 compliant, but not all Go 1 programs
-// will pass in strict mode. The additional rules are:
-//
-// - A type assertion x.(T) where T is an interface type
-//   is invalid if any (statically known) method that exists
-//   for both x and T have different signatures.
-//
-const forceStrict = false
-
 // exprInfo stores information about an untyped expression.
 type exprInfo struct {
 	isLhs bool // expression is lhs operand of a shift with delayed type-check
@@ -139,7 +126,7 @@ type Checker struct {
 	untyped  map[syntax.Expr]exprInfo // map of expressions without final type
 	delayed  []action                 // stack of delayed action segments; segments are processed in FIFO order
 	objPath  []Object                 // path of object dependencies during type inference (for cycle reporting)
-	defTypes []*Named                 // defined types created during type checking, for final validation.
+	cleaners []cleaner                // list of types that may need a final cleanup at the end of type-checking
 
 	// environment within which the current object is type-checked (valid only
 	// for the duration of type-checking a specific object)
@@ -218,6 +205,16 @@ func (check *Checker) pop() Object {
 	return obj
 }
 
+type cleaner interface {
+	cleanup()
+}
+
+// needsCleanup records objects/types that implement the cleanup method
+// which will be called at the end of type-checking.
+func (check *Checker) needsCleanup(c cleaner) {
+	check.cleaners = append(check.cleaners, c)
+}
+
 // NewChecker returns a new Checker instance for a given package.
 // Package files may be added incrementally via checker.Files.
 func NewChecker(conf *Config, pkg *Package, info *Info) *Checker {
@@ -260,6 +257,8 @@ func (check *Checker) initFiles(files []*syntax.File) {
 	check.methods = nil
 	check.untyped = nil
 	check.delayed = nil
+	check.objPath = nil
+	check.cleaners = nil
 
 	// determine package name and collect valid files
 	pkg := check.pkg
@@ -328,8 +327,8 @@ func (check *Checker) checkFiles(files []*syntax.File) (err error) {
 	print("== processDelayed ==")
 	check.processDelayed(0) // incl. all functions
 
-	print("== expandDefTypes ==")
-	check.expandDefTypes()
+	print("== cleanup ==")
+	check.cleanup()
 
 	print("== initOrder ==")
 	check.initOrder()
@@ -357,7 +356,6 @@ func (check *Checker) checkFiles(files []*syntax.File) (err error) {
 	check.recvTParamMap = nil
 	check.brokenAliases = nil
 	check.unionTypeSets = nil
-	check.defTypes = nil
 	check.ctxt = nil
 
 	// TODO(gri) There's more memory we should release at this point.
@@ -385,27 +383,13 @@ func (check *Checker) processDelayed(top int) {
 	check.delayed = check.delayed[:top]
 }
 
-func (check *Checker) expandDefTypes() {
-	// Ensure that every defined type created in the course of type-checking has
-	// either non-*Named underlying, or is unresolved.
-	//
-	// This guarantees that we don't leak any types whose underlying is *Named,
-	// because any unresolved instances will lazily compute their underlying by
-	// substituting in the underlying of their origin. The origin must have
-	// either been imported or type-checked and expanded here, and in either case
-	// its underlying will be fully expanded.
-	for i := 0; i < len(check.defTypes); i++ {
-		n := check.defTypes[i]
-		switch n.underlying.(type) {
-		case nil:
-			if n.resolver == nil {
-				panic("nil underlying")
-			}
-		case *Named:
-			n.under() // n.under may add entries to check.defTypes
-		}
-		n.check = nil
-	}
+// cleanup runs cleanup for all collected cleaners.
+func (check *Checker) cleanup() {
+	// Don't use a range clause since Named.cleanup may add more cleaners.
+	for i := 0; i < len(check.cleaners); i++ {
+		check.cleaners[i].cleanup()
+	}
+	check.cleaners = nil
 }
 
 func (check *Checker) record(x *operand) {
@@ -19,12 +19,12 @@ func AsSignature(t Type) *Signature {
 	return u
 }
 
-// If typ is a type parameter, structuralType returns the single underlying
+// If typ is a type parameter, CoreType returns the single underlying
 // type of all types in the corresponding type constraint if it exists, or
 // nil otherwise. If the type set contains only unrestricted and restricted
 // channel types (with identical element types), the single underlying type
 // is the restricted channel type if the restrictions are always the same.
-// If typ is not a type parameter, structuralType returns the underlying type.
-func StructuralType(t Type) Type {
-	return structuralType(t)
+// If typ is not a type parameter, CoreType returns the underlying type.
+func CoreType(t Type) Type {
+	return coreType(t)
 }
@@ -49,11 +49,14 @@ func (check *Checker) conversion(x *operand, T Type) {
 			// have specific types, constant x cannot be
 			// converted.
 			ok = T.(*TypeParam).underIs(func(u Type) bool {
-				// t is nil if there are no specific type terms
+				// u is nil if there are no specific type terms
 				if u == nil {
 					cause = check.sprintf("%s does not contain specific types", T)
 					return false
 				}
+				if isString(x.typ) && isBytesOrRunes(u) {
+					return true
+				}
 				if !constConvertibleTo(u, nil) {
 					cause = check.sprintf("cannot convert %s to %s (in %s)", x, u, T)
 					return false
@@ -569,7 +569,6 @@ func (check *Checker) collectTypeParams(dst **TypeParamList, list []*syntax.Fiel
 
 	// Keep track of bounds for later validation.
 	var bound Type
-	var bounds []Type
 	for i, f := range list {
 		// Optimization: Re-use the previous type bound if it hasn't changed.
 		// This also preserves the grouped output of type parameter lists
@@ -584,7 +583,6 @@ func (check *Checker) collectTypeParams(dst **TypeParamList, list []*syntax.Fiel
 				check.error(f.Type, "cannot use a type parameter as constraint")
 				bound = Typ[Invalid]
 			}
-			bounds = append(bounds, bound)
 		}
 		tparams[i].bound = bound
 	}
@@ -124,6 +124,17 @@ func sprintf(qf Qualifier, debug bool, format string, args ...interface{}) strin
 		}
 		buf.WriteByte(']')
 		arg = buf.String()
+	case []*TypeParam:
+		var buf bytes.Buffer
+		buf.WriteByte('[')
+		for i, x := range a {
+			if i > 0 {
+				buf.WriteString(", ")
+			}
+			buf.WriteString(typeString(x, qf, debug)) // use typeString so we get subscripts when debugging
+		}
+		buf.WriteByte(']')
+		arg = buf.String()
 	}
 	args[i] = arg
 }
@@ -182,9 +182,9 @@ func (check *Checker) unary(x *operand, e *syntax.Operation) {
 		return
 
 	case syntax.Recv:
-		u := structuralType(x.typ)
+		u := coreType(x.typ)
 		if u == nil {
-			check.errorf(x, invalidOp+"cannot receive from %s: no structural type", x)
+			check.errorf(x, invalidOp+"cannot receive from %s: no core type", x)
 			x.mode = invalid
 			return
 		}
@@ -899,7 +899,7 @@ func (check *Checker) incomparableCause(typ Type) string {
 	}
 	// see if we can extract a more specific error
 	var cause string
-	comparable(typ, nil, func(format string, args ...interface{}) {
+	comparable(typ, true, nil, func(format string, args ...interface{}) {
 		cause = check.sprintf(format, args...)
 	})
 	return cause
@@ -1359,7 +1359,11 @@ func (check *Checker) exprInternal(x *operand, e syntax.Expr, hint Type) exprKin
 	case hint != nil:
 		// no composite literal type present - use hint (element type of enclosing type)
 		typ = hint
-		base, _ = deref(structuralType(typ)) // *T implies &T{}
+		base, _ = deref(coreType(typ)) // *T implies &T{}
+		if base == nil {
+			check.errorf(e, "invalid composite literal element type %s: no core type", typ)
+			goto Error
+		}
 
 	default:
 		// TODO(gri) provide better error messages depending on context
@@ -1367,7 +1371,7 @@ func (check *Checker) exprInternal(x *operand, e syntax.Expr, hint Type) exprKin
 		goto Error
 	}
 
-	switch utyp := structuralType(base).(type) {
+	switch utyp := coreType(base).(type) {
 	case *Struct:
 		// Prevent crash if the struct referred to is not yet set up.
 		// See analogous comment for *Array.
@@ -182,7 +182,7 @@ func (check *Checker) indexExpr(x *operand, e *syntax.IndexExpr) (isFuncInst boo
 	}
 
 	if !valid {
-		check.errorf(x, invalidOp+"cannot index %s", x)
+		check.errorf(e.Pos(), invalidOp+"cannot index %s", x)
 		x.mode = invalid
 		return false
 	}
@@ -213,9 +213,9 @@ func (check *Checker) sliceExpr(x *operand, e *syntax.SliceExpr) {
 	valid := false
 	length := int64(-1) // valid if >= 0
-	switch u := structuralString(x.typ).(type) {
+	switch u := coreString(x.typ).(type) {
 	case nil:
-		check.errorf(x, invalidOp+"cannot slice %s: %s has no structural type", x, x.typ)
+		check.errorf(x, invalidOp+"cannot slice %s: %s has no core type", x, x.typ)
 		x.mode = invalid
 		return
@@ -41,6 +41,13 @@ func (check *Checker) infer(pos syntax.Pos, tparams []*TypeParam, targs []Type,
 		}()
 	}
 
+	if traceInference {
+		check.dump("-- inferA %s%s ➞ %s", tparams, params, targs)
+		defer func() {
+			check.dump("=> inferA %s ➞ %s", tparams, result)
+		}()
+	}
+
 	// There must be at least one type parameter, and no more type arguments than type parameters.
 	n := len(tparams)
 	assert(n > 0 && len(targs) <= n)
@@ -54,6 +61,64 @@ func (check *Checker) infer(pos syntax.Pos, tparams []*TypeParam, targs []Type,
 	}
 	// len(targs) < n
 
+	const enableTparamRenaming = true
+	if enableTparamRenaming {
+		// For the purpose of type inference we must differentiate type parameters
+		// occurring in explicit type or value function arguments from the type
+		// parameters we are solving for via unification, because they may be the
+		// same in self-recursive calls. For example:
+		//
+		//  func f[P *Q, Q any](p P, q Q) {
+		//    f(p)
+		//  }
+		//
+		// In this example, the fact that the P used in the instantation f[P] has
+		// the same pointer identity as the P we are trying to solve for via
+		// unification is coincidental: there is nothing special about recursive
+		// calls that should cause them to conflate the identity of type arguments
+		// with type parameters. To put it another way: any such self-recursive
+		// call is equivalent to a mutually recursive call, which does not run into
+		// any problems of type parameter identity. For example, the following code
+		// is equivalent to the code above.
+		//
+		//  func f[P interface{*Q}, Q any](p P, q Q) {
+		//    f2(p)
+		//  }
+		//
+		//  func f2[P interface{*Q}, Q any](p P, q Q) {
+		//    f(p)
+		//  }
+		//
+		// We can turn the first example into the second example by renaming type
+		// parameters in the original signature to give them a new identity. As an
+		// optimization, we do this only for self-recursive calls.
+
+		// We can detect if we are in a self-recursive call by comparing the
+		// identity of the first type parameter in the current function with the
+		// first type parameter in tparams. This works because type parameters are
+		// unique to their type parameter list.
+		selfRecursive := check.sig != nil && check.sig.tparams.Len() > 0 && tparams[0] == check.sig.tparams.At(0)
+
+		if selfRecursive {
+			// In self-recursive inference, rename the type parameters with new type
+			// parameters that are the same but for their pointer identity.
+			tparams2 := make([]*TypeParam, len(tparams))
+			for i, tparam := range tparams {
+				tname := NewTypeName(tparam.Obj().Pos(), tparam.Obj().Pkg(), tparam.Obj().Name(), nil)
+				tparams2[i] = NewTypeParam(tname, nil)
+				tparams2[i].index = tparam.index // == i
+			}
+
+			renameMap := makeRenameMap(tparams, tparams2)
+			for i, tparam := range tparams {
+				tparams2[i].bound = check.subst(pos, tparam.bound, renameMap, nil)
+			}
+
+			tparams = tparams2
+			params = check.subst(pos, params, renameMap, nil).(*Tuple)
+		}
+	}
+
 	// If we have more than 2 arguments, we may have arguments with named and unnamed types.
 	// If that is the case, permutate params and args such that the arguments with named
 	// types are first in the list. This doesn't affect type inference if all types are taken
@@ -403,6 +468,13 @@ func (w *tpWalker) isParameterizedTypeList(list []Type) bool {
 func (check *Checker) inferB(pos syntax.Pos, tparams []*TypeParam, targs []Type) (types []Type, index int) {
 	assert(len(tparams) >= len(targs) && len(targs) > 0)
 
+	if traceInference {
+		check.dump("-- inferB %s ➞ %s", tparams, targs)
+		defer func() {
+			check.dump("=> inferB %s ➞ %s", tparams, types)
+		}()
+	}
+
 	// Setup bidirectional unification between constraints
 	// and the corresponding type arguments (which may be nil!).
 	u := newUnifier(false)
@@ -416,27 +488,88 @@ func (check *Checker) inferB(pos syntax.Pos, tparams []*TypeParam, targs []Type)
 		}
 	}
 
-	// If a constraint has a structural type, unify the corresponding type parameter with it.
-	for _, tpar := range tparams {
-		sbound := structuralType(tpar)
-		if sbound != nil {
-			// If the structural type is the underlying type of a single
-			// defined type in the constraint, use that defined type instead.
-			if named, _ := tpar.singleType().(*Named); named != nil {
-				sbound = named
-			}
-			if !u.unify(tpar, sbound) {
-				// TODO(gri) improve error message by providing the type arguments
-				// which we know already
-				check.errorf(pos, "%s does not match %s", tpar, sbound)
-				return nil, 0
+	// Repeatedly apply constraint type inference as long as
+	// there are still unknown type arguments and progress is
+	// being made.
+	//
+	// This is an O(n^2) algorithm where n is the number of
+	// type parameters: if there is progress (and iteration
+	// continues), at least one type argument is inferred
+	// per iteration and we have a doubly nested loop.
+	// In practice this is not a problem because the number
+	// of type parameters tends to be very small (< 5 or so).
+	// (It should be possible for unification to efficiently
+	// signal newly inferred type arguments; then the loops
+	// here could handle the respective type parameters only,
+	// but that will come at a cost of extra complexity which
+	// may not be worth it.)
+	for n := u.x.unknowns(); n > 0; {
+		nn := n
+
+		for i, tpar := range tparams {
+			// If there is a core term (i.e., a core type with tilde information)
+			// unify the type parameter with the core type.
+			if core, single := coreTerm(tpar); core != nil {
+				// A type parameter can be unified with its core type in two cases.
+				tx := u.x.at(i)
+				switch {
+				case tx != nil:
+					// The corresponding type argument tx is known.
+					// In this case, if the core type has a tilde, the type argument's underlying
+					// type must match the core type, otherwise the type argument and the core type
+					// must match.
+					// If tx is an external type parameter, don't consider its underlying type
+					// (which is an interface). Core type unification will attempt to unify against
+					// core.typ.
+					// Note also that even with inexact unification we cannot leave away the under
+					// call here because it's possible that both tx and core.typ are named types,
+					// with under(tx) being a (named) basic type matching core.typ. Such cases do
+					// not match with inexact unification.
+					if core.tilde && !isTypeParam(tx) {
+						tx = under(tx)
+					}
+					if !u.unify(tx, core.typ) {
+						// TODO(gri) improve error message by providing the type arguments
+						// which we know already
+						// Don't use term.String() as it always qualifies types, even if they
+						// are in the current package.
+						tilde := ""
+						if core.tilde {
+							tilde = "~"
+						}
+						check.errorf(pos, "%s does not match %s%s", tpar, tilde, core.typ)
+						return nil, 0
+					}
+				case single && !core.tilde:
+					// The corresponding type argument tx is unknown and there's a single
+					// specific type and no tilde.
+					// In this case the type argument must be that single type; set it.
+					u.x.set(i, core.typ)
+				default:
+					// Unification is not possible and no progress was made.
+					continue
+				}
+
+				// The number of known type arguments may have changed.
+				nn = u.x.unknowns()
+				if nn == 0 {
+					break // all type arguments are known
+				}
 			}
 		}
+
+		assert(nn <= n)
+		if nn == n {
+			break // no progress
+		}
+		n = nn
 	}
 
 	// u.x.types() now contains the incoming type arguments plus any additional type
-	// arguments which were inferred from structural types. The newly inferred non-
-	// nil entries may still contain references to other type parameters.
+	// arguments which were inferred from core terms. The newly inferred non-nil
+	// entries may still contain references to other type parameters.
 	// For instance, for [A any, B interface{ []C }, C interface{ *A }], if A == int
 	// was given, unification produced the type list [int, []C, *A]. We eliminate the
 	// remaining type parameters by substituting the type parameters in this type list
@@ -504,8 +637,8 @@ func (check *Checker) inferB(pos syntax.Pos, tparams []*TypeParam, targs []Type)
 	}
 
 	// Once nothing changes anymore, we may still have type parameters left;
-	// e.g., a structural constraint *P may match a type parameter Q but we
-	// don't have any type arguments to fill in for *P or Q (issue #45548).
+	// e.g., a constraint with core type *P may match a type parameter Q but
+	// we don't have any type arguments to fill in for *P or Q (issue #45548).
 	// Don't let such inferences escape, instead nil them out.
 	for i, typ := range types {
 		if typ != nil && isParameterized(tparams, typ) {
@@ -525,6 +658,42 @@ func (check *Checker) inferB(pos syntax.Pos, tparams []*TypeParam, targs []Type)
 	return
 }
 
+// If the type parameter has a single specific type S, coreTerm returns (S, true).
+// Otherwise, if tpar has a core type T, it returns a term corresponding to that
+// core type and false. In that case, if any term of tpar has a tilde, the core
+// term has a tilde. In all other cases coreTerm returns (nil, false).
+func coreTerm(tpar *TypeParam) (*term, bool) {
+	n := 0
+	var single *term // valid if n == 1
+	var tilde bool
+	tpar.is(func(t *term) bool {
+		if t == nil {
+			assert(n == 0)
+			return false // no terms
+		}
+		n++
+		single = t
+		if t.tilde {
+			tilde = true
+		}
+		return true
+	})
+	if n == 1 {
+		if debug {
+			assert(debug && under(single.typ) == coreType(tpar))
+		}
+		return single, true
+	}
+	if typ := coreType(tpar); typ != nil {
+		// A core type is always an underlying type.
+		// If any term of tpar has a tilde, we don't
+		// have a precise core type and we must return
+		// a tilde as well.
+		return &term{tilde, typ}, false
+	}
+	return nil, false
+}
+
 type cycleFinder struct {
 	tparams []*TypeParam
 	types   []Type
@@ -204,7 +204,7 @@ func (check *Checker) implements(V, T Type) error {
 	// If T is comparable, V must be comparable.
 	// Remember as a pending error and report only if we don't have a more specific error.
 	var pending error
-	if Ti.IsComparable() && ((Vi != nil && !Vi.IsComparable()) || (Vi == nil && !Comparable(V))) {
+	if Ti.IsComparable() && !comparable(V, false, nil, nil) {
 		pending = errorf("%s does not implement comparable", V)
 	}
@@ -37,7 +37,7 @@ func NewInterfaceType(methods []*Func, embeddeds []Type) *Interface {
 	}
 
 	// set method receivers if necessary
-	typ := new(Interface)
+	typ := (*Checker)(nil).newInterface()
 	for _, m := range methods {
 		if sig := m.typ.(*Signature); sig.recv == nil {
 			sig.recv = NewVar(m.pos, m.pkg, "", typ)
@@ -54,6 +54,15 @@ func NewInterfaceType(methods []*Func, embeddeds []Type) *Interface {
 	return typ
 }
 
+// check may be nil
+func (check *Checker) newInterface() *Interface {
+	typ := &Interface{check: check}
+	if check != nil {
+		check.needsCleanup(typ)
+	}
+	return typ
+}
+
 // MarkImplicit marks the interface t as implicit, meaning this interface
 // corresponds to a constraint literal such as ~T or A|B without explicit
 // interface embedding. MarkImplicit should be called before any concurrent use
@@ -100,6 +109,11 @@ func (t *Interface) String() string { return TypeString(t, nil) }
 // ----------------------------------------------------------------------------
 // Implementation
 
+func (t *Interface) cleanup() {
+	t.check = nil
+	t.embedPos = nil
+}
+
 func (check *Checker) interfaceType(ityp *Interface, iface *syntax.InterfaceType, def *Named) {
 	addEmbedded := func(pos syntax.Pos, typ Type) {
 		ityp.embeddeds = append(ityp.embeddeds, typ)
@@ -162,16 +176,10 @@ func (check *Checker) interfaceType(ityp *Interface, iface *syntax.InterfaceType
 	// (don't sort embeddeds: they must correspond to *embedPos entries)
 	sortMethods(ityp.methods)
 
-	// Compute type set with a non-nil *Checker as soon as possible
-	// to report any errors. Subsequent uses of type sets will use
-	// this computed type set and won't need to pass in a *Checker.
-	//
-	// Pin the checker to the interface type in the interim, in case the type set
-	// must be used before delayed funcs are processed (see issue #48234).
-	// TODO(rfindley): clean up use of *Checker with computeInterfaceTypeSet
-	ityp.check = check
+	// Compute type set as soon as possible to report any errors.
+	// Subsequent uses of type sets will use this computed type
+	// set and won't need to pass in a *Checker.
 	check.later(func() {
 		computeInterfaceTypeSet(check, iface.Pos(), ityp)
-		ityp.check = nil
 	}).describef(iface, "compute type set for %s", ityp)
 }
@@ -66,12 +66,12 @@ func LookupFieldOrMethod(T Type, addressable bool, pkg *Package, name string) (o
 	obj, index, indirect = lookupFieldOrMethod(T, addressable, pkg, name, false)
 
-	// If we didn't find anything and if we have a type parameter with a structural constraint,
-	// see if there is a matching field (but not a method, those need to be declared explicitly
-	// in the constraint). If the structural constraint is a named pointer type (see above), we
-	// are ok here because only fields are accepted as results.
+	// If we didn't find anything and if we have a type parameter with a core type,
+	// see if there is a matching field (but not a method, those need to be declared
+	// explicitly in the constraint). If the constraint is a named pointer type (see
+	// above), we are ok here because only fields are accepted as results.
 	if obj == nil && isTypeParam(T) {
-		if t := structuralType(T); t != nil {
+		if t := coreType(T); t != nil {
 			obj, index, indirect = lookupFieldOrMethod(t, addressable, pkg, name, false)
 			if _, ok := obj.(*Var); !ok {
 				obj, index, indirect = nil, nil, false // accept fields (variables) only
@ -425,18 +425,31 @@ func (check *Checker) funcString(f *Func) string {
// method required by V and whether it is missing or just has the wrong type. // method required by V and whether it is missing or just has the wrong type.
// The receiver may be nil if assertableTo is invoked through an exported API call // The receiver may be nil if assertableTo is invoked through an exported API call
// (such as AssertableTo), i.e., when all methods have been type-checked. // (such as AssertableTo), i.e., when all methods have been type-checked.
// If the global constant forceStrict is set, assertions that are known to fail // TODO(gri) replace calls to this function with calls to newAssertableTo.
// are not permitted.
func (check *Checker) assertableTo(V *Interface, T Type) (method, wrongType *Func) { func (check *Checker) assertableTo(V *Interface, T Type) (method, wrongType *Func) {
// no static check is required if T is an interface // no static check is required if T is an interface
// spec: "If T is an interface type, x.(T) asserts that the // spec: "If T is an interface type, x.(T) asserts that the
// dynamic type of x implements the interface T." // dynamic type of x implements the interface T."
if IsInterface(T) && !forceStrict { if IsInterface(T) {
return return
} }
// TODO(gri) fix this for generalized interfaces
return check.missingMethod(T, V, false) return check.missingMethod(T, V, false)
} }
// newAssertableTo reports whether a value of type V can be asserted to have type T.
// It also implements behavior for interfaces that currently are only permitted
// in constraint position (we have not yet defined that behavior in the spec).
func (check *Checker) newAssertableTo(V *Interface, T Type) error {
// no static check is required if T is an interface
// spec: "If T is an interface type, x.(T) asserts that the
// dynamic type of x implements the interface T."
if IsInterface(T) {
return nil
}
return check.implements(T, V)
}
// deref dereferences typ if it is a *Pointer and returns its base and true. // deref dereferences typ if it is a *Pointer and returns its base and true.
// Otherwise it returns (typ, false). // Otherwise it returns (typ, false).
func deref(typ Type) (Type, bool) { func deref(typ Type) (Type, bool) {
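Both assertableTo and newAssertableTo above skip the static check exactly when T is an interface type, per the spec sentence they quote: such an assertion is always legal at compile time and is resolved dynamically. A minimal sketch of that behavior (the Stringer interface and values here are illustrative, not part of this CL):

```go
package main

import "fmt"

type Stringer interface{ String() string }

func main() {
	var x interface{} = 42 // dynamic type int does not implement Stringer
	// T (Stringer) is an interface type, so the compiler performs no
	// static check; the assertion is checked at run time and fails.
	s, ok := x.(Stringer)
	fmt.Println(s, ok)
}
```

With the comma-ok form the failed assertion yields the zero value and false instead of panicking.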
@@ -72,11 +72,31 @@ func (check *Checker) newNamed(obj *TypeName, orig *Named, underlying Type, tpar
 	}
 	// Ensure that typ is always expanded and sanity-checked.
 	if check != nil {
-		check.defTypes = append(check.defTypes, typ)
+		check.needsCleanup(typ)
 	}
 	return typ
 }
 
+func (t *Named) cleanup() {
+	// Ensure that every defined type created in the course of type-checking has
+	// either non-*Named underlying, or is unresolved.
+	//
+	// This guarantees that we don't leak any types whose underlying is *Named,
+	// because any unresolved instances will lazily compute their underlying by
+	// substituting in the underlying of their origin. The origin must have
+	// either been imported or type-checked and expanded here, and in either case
+	// its underlying will be fully expanded.
+	switch t.underlying.(type) {
+	case nil:
+		if t.resolver == nil {
+			panic("nil underlying")
+		}
+	case *Named:
+		t.under() // t.under may add entries to check.cleaners
+	}
+	t.check = nil
+}
+
 // Obj returns the type name for the declaration defining the named type t. For
 // instantiated types, this is the type name of the base type.
 func (t *Named) Obj() *TypeName { return t.orig.obj } // for non-instances this is the same as t.obj
@@ -360,11 +380,11 @@ func expandNamed(ctxt *Context, n *Named, instPos syntax.Pos) (tparams *TypePara
 			// that it wasn't substituted. In this case we need to create a new
 			// *Interface before modifying receivers.
 			if iface == n.orig.underlying {
-				iface = &Interface{
-					embeddeds: iface.embeddeds,
-					complete:  iface.complete,
-					implicit:  iface.implicit, // should be false but be conservative
-				}
+				old := iface
+				iface = check.newInterface()
+				iface.embeddeds = old.embeddeds
+				iface.complete = old.complete
+				iface.implicit = old.implicit // should be false but be conservative
 				underlying = iface
 			}
 			iface.methods = methods
@@ -31,7 +31,7 @@ func isBasic(t Type, info BasicInfo) bool {
 // The allX predicates below report whether t is an X.
 // If t is a type parameter the result is true if isX is true
 // for all specified types of the type parameter's type set.
-// allX is an optimized version of isX(structuralType(t)) (which
+// allX is an optimized version of isX(coreType(t)) (which
 // is the same as underIs(t, isX)).
 
 func allBoolean(t Type) bool { return allBasic(t, IsBoolean) }
@@ -45,7 +45,7 @@ func allNumericOrString(t Type) bool { return allBasic(t, IsNumeric|IsString) }
 // allBasic reports whether under(t) is a basic type with the specified info.
 // If t is a type parameter, the result is true if isBasic(t, info) is true
 // for all specific types of the type parameter's type set.
-// allBasic(t, info) is an optimized version of isBasic(structuralType(t), info).
+// allBasic(t, info) is an optimized version of isBasic(coreType(t), info).
 func allBasic(t Type, info BasicInfo) bool {
 	if tpar, _ := t.(*TypeParam); tpar != nil {
 		return tpar.is(func(t *term) bool { return t != nil && isBasic(t.typ, info) })
@@ -102,11 +102,12 @@ func isGeneric(t Type) bool {
 // Comparable reports whether values of type T are comparable.
 func Comparable(T Type) bool {
-	return comparable(T, nil, nil)
+	return comparable(T, true, nil, nil)
 }
 
+// If dynamic is set, non-type parameter interfaces are always comparable.
 // If reportf != nil, it may be used to report why T is not comparable.
-func comparable(T Type, seen map[Type]bool, reportf func(string, ...interface{})) bool {
+func comparable(T Type, dynamic bool, seen map[Type]bool, reportf func(string, ...interface{})) bool {
 	if seen[T] {
 		return true
 	}
@@ -124,7 +125,7 @@ func comparable(T Type, seen map[Type]bool, reportf func(string, ...interface{})
 		return true
 	case *Struct:
 		for _, f := range t.fields {
-			if !comparable(f.typ, seen, nil) {
+			if !comparable(f.typ, dynamic, seen, nil) {
 				if reportf != nil {
 					reportf("struct containing %s cannot be compared", f.typ)
 				}
@@ -133,7 +134,7 @@ func comparable(T Type, seen map[Type]bool, reportf func(string, ...interface{})
 		}
 		return true
 	case *Array:
-		if !comparable(t.elem, seen, nil) {
+		if !comparable(t.elem, dynamic, seen, nil) {
 			if reportf != nil {
 				reportf("%s cannot be compared", t)
 			}
@@ -141,7 +142,7 @@ func comparable(T Type, seen map[Type]bool, reportf func(string, ...interface{})
 		}
 		return true
 	case *Interface:
-		return !isTypeParam(T) || t.typeSet().IsComparable(seen)
+		return dynamic && !isTypeParam(T) || t.typeSet().IsComparable(seen)
 	}
 	return false
 }
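The new dynamic flag captures the language's two notions of comparability: a non-type-parameter interface type is always a valid operand of ==, even though its type set may contain incomparable types; in that case the comparison panics at run time rather than being rejected statically. A small sketch of the run-time side of that rule:

```go
package main

import "fmt"

func main() {
	// Interface values are (dynamically) comparable, but the comparison
	// panics if the dynamic type is not comparable.
	defer func() {
		fmt.Println("panicked:", recover() != nil)
	}()
	var a, b interface{} = []int{1}, []int{1}
	fmt.Println(a == b) // []int is not comparable: run-time panic
}
```

A type parameter, by contrast, must be strictly comparable for ==, which is why the *Interface case above only short-circuits when dynamic is set and T is not a type parameter.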
@@ -116,11 +116,10 @@ func (check *Checker) funcType(sig *Signature, recvPar *syntax.Field, tparams []
 		// lookup in the scope.
 		for i, p := range rparams {
 			if p.Value == "_" {
-				tpar := sig.rparams.At(i)
 				if check.recvTParamMap == nil {
 					check.recvTParamMap = make(map[*syntax.Name]*TypeParam)
 				}
-				check.recvTParamMap[p] = tpar
+				check.recvTParamMap[p] = tparams[i]
 			}
 		}
 		// determine receiver type to get its type parameters
@@ -136,22 +135,23 @@ func (check *Checker) funcType(sig *Signature, recvPar *syntax.Field, tparams []
 				}
 			}
 			// provide type parameter bounds
-			// - only do this if we have the right number (otherwise an error is reported elsewhere)
-			if sig.RecvTypeParams().Len() == len(recvTParams) {
-				// We have a list of *TypeNames but we need a list of Types.
-				list := make([]Type, sig.RecvTypeParams().Len())
-				for i, t := range sig.RecvTypeParams().list() {
-					list[i] = t
-					check.mono.recordCanon(t, recvTParams[i])
-				}
-				smap := makeSubstMap(recvTParams, list)
-				for i, tpar := range sig.RecvTypeParams().list() {
-					bound := recvTParams[i].bound
-					// bound is (possibly) parameterized in the context of the
-					// receiver type declaration. Substitute parameters for the
-					// current context.
-					tpar.bound = check.subst(tpar.obj.pos, bound, smap, nil)
+			if len(tparams) == len(recvTParams) {
+				smap := makeRenameMap(recvTParams, tparams)
+				for i, tpar := range tparams {
+					recvTPar := recvTParams[i]
+					check.mono.recordCanon(tpar, recvTPar)
+					// recvTPar.bound is (possibly) parameterized in the context of the
+					// receiver type declaration. Substitute parameters for the current
+					// context.
+					tpar.bound = check.subst(tpar.obj.pos, recvTPar.bound, smap, nil)
 				}
+			} else if len(tparams) < len(recvTParams) {
+				// Reporting an error here is a stop-gap measure to avoid crashes in the
+				// compiler when a type parameter/argument cannot be inferred later. It
+				// may lead to follow-on errors (see issues #51339, #51343).
+				// TODO(gri) find a better solution
+				got := measure(len(tparams), "type parameter")
+				check.errorf(recvPar, "got %s, but receiver base type declares %d", got, len(recvTParams))
 			}
 		}
 	}
@@ -194,9 +194,11 @@ func (check *Checker) funcType(sig *Signature, recvPar *syntax.Field, tparams []
 		case 1:
 			recv = recvList[0]
 		}
+		sig.recv = recv
 
-		// TODO(gri) We should delay rtyp expansion to when we actually need the
-		// receiver; thus all checks here should be delayed to later.
-		rtyp, _ := deref(recv.typ)
+		// Delay validation of receiver type as it may cause premature expansion
+		// of types the receiver type is dependent on (see issues #51232, #51233).
+		check.later(func() {
+			rtyp, _ := deref(recv.typ)
 
 			// spec: "The receiver type must be of the form T or *T where T is a type name."
@@ -224,6 +226,8 @@ func (check *Checker) funcType(sig *Signature, recvPar *syntax.Field, tparams []
 			} else {
 				// The underlying type of a receiver base type can be a type parameter;
 				// e.g. for methods with a generic receiver T[P] with type T[P any] P.
+				// TODO(gri) Such declarations are currently disallowed.
+				//           Revisit the need for underIs.
 				underIs(T, func(u Type) bool {
 					switch u := u.(type) {
 					case *Basic:
@@ -250,10 +254,9 @@ func (check *Checker) funcType(sig *Signature, recvPar *syntax.Field, tparams []
 			}
 			if err != "" {
 				check.errorf(recv.pos, "invalid receiver type %s (%s)", recv.typ, err)
-				// ok to continue
 			}
 		}
-		sig.recv = recv
+		}).describef(recv, "validate receiver %s", recv)
 	}
 
 	sig.params = NewTuple(params...)
@@ -409,9 +409,9 @@ func (check *Checker) stmt(ctxt stmtContext, s syntax.Stmt) {
 		if ch.mode == invalid || val.mode == invalid {
 			return
 		}
-		u := structuralType(ch.typ)
+		u := coreType(ch.typ)
 		if u == nil {
-			check.errorf(s, invalidOp+"cannot send to %s: no structural type", &ch)
+			check.errorf(s, invalidOp+"cannot send to %s: no core type", &ch)
 			return
 		}
 		uch, _ := u.(*Chan)
@@ -626,14 +626,15 @@ func (check *Checker) stmt(ctxt stmtContext, s syntax.Stmt) {
 	case *syntax.ForStmt:
 		inner |= breakOk | continueOk
 
-		check.openScope(s, "for")
-		defer check.closeScope()
-
 		if rclause, _ := s.Init.(*syntax.RangeClause); rclause != nil {
 			check.rangeStmt(inner, s, rclause)
 			break
 		}
 
+		check.openScope(s, "for")
+		defer check.closeScope()
+
 		check.simpleStmt(s.Init)
 		if s.Cond != nil {
 			var x operand
@@ -809,8 +810,6 @@ func (check *Checker) typeSwitchStmt(inner stmtContext, s *syntax.SwitchStmt, gu
 }
 
 func (check *Checker) rangeStmt(inner stmtContext, s *syntax.ForStmt, rclause *syntax.RangeClause) {
-	// scope already opened
-
 	// determine lhs, if any
 	sKey := rclause.Lhs // possibly nil
 	var sValue, sExtra syntax.Expr
@@ -835,9 +834,9 @@ func (check *Checker) rangeStmt(inner stmtContext, s *syntax.ForStmt, rclause *s
 	// determine key/value types
 	var key, val Type
 	if x.mode != invalid {
-		// Ranging over a type parameter is permitted if it has a structural type.
+		// Ranging over a type parameter is permitted if it has a core type.
 		var cause string
-		u := structuralType(x.typ)
+		u := coreType(x.typ)
 		if t, _ := u.(*Chan); t != nil {
 			if sValue != nil {
 				check.softErrorf(sValue, "range over %s permits only one iteration variable", &x)
@@ -852,7 +851,7 @@ func (check *Checker) rangeStmt(inner stmtContext, s *syntax.ForStmt, rclause *s
 				// ok to continue
 			}
 			if u == nil {
-				cause = check.sprintf("%s has no structural type", x.typ)
+				cause = check.sprintf("%s has no core type", x.typ)
 			}
 		}
 		key, val = rangeKeyVal(u)
@@ -866,6 +865,11 @@ func (check *Checker) rangeStmt(inner stmtContext, s *syntax.ForStmt, rclause *s
 		}
 	}
 
+	// Open the for-statement block scope now, after the range clause.
+	// Iteration variables declared with := need to go in this scope (was issue #51437).
+	check.openScope(s, "range")
+	defer check.closeScope()
+
 	// check assignment to/declaration of iteration variables
 	// (irregular assignment, cannot easily map to existing assignment checks)
 
@@ -874,9 +878,7 @@ func (check *Checker) rangeStmt(inner stmtContext, s *syntax.ForStmt, rclause *s
 	rhs := [2]Type{key, val} // key, val may be nil
 
 	if rclause.Def {
-		// short variable declaration; variable scope starts after the range clause
-		// (the for loop opens a new scope, so variables on the lhs never redeclare
-		// previously declared variables)
+		// short variable declaration
 		var vars []*Var
 		for i, lhs := range lhs {
 			if lhs == nil {
@@ -913,12 +915,8 @@ func (check *Checker) rangeStmt(inner stmtContext, s *syntax.ForStmt, rclause *s
 		// declare variables
 		if len(vars) > 0 {
-			scopePos := syntax.EndPos(rclause.X) // TODO(gri) should this just be s.Body.Pos (spec clarification)?
+			scopePos := s.Body.Pos()
 			for _, obj := range vars {
-				// spec: "The scope of a constant or variable identifier declared inside
-				// a function begins at the end of the ConstSpec or VarSpec (ShortVarDecl
-				// for short variable declarations) and ends at the end of the innermost
-				// containing block."
 				check.declare(check.scope, nil /* recordDef already called */, obj, scopePos)
 			}
 		} else {
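The scoping change above (opening the block scope only after the range clause, with iteration-variable scope starting at s.Body.Pos()) preserves the spec rule that the range expression is evaluated in the enclosing scope, so a := variable in the clause may shadow the very name being ranged over. A minimal sketch of the behavior being kept correct:

```go
package main

import "fmt"

func main() {
	x := []int{10, 20}
	// The range expression x refers to the outer slice; the newly
	// declared x (an element) is scoped to the loop body only.
	for _, x := range x {
		fmt.Println(x)
	}
	fmt.Println(len(x)) // the outer x is unchanged
}
```

Under issue #51437, declaring the iteration variables in the wrong scope made such shadowing crash the type checker.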
@@ -21,6 +21,17 @@ func makeSubstMap(tpars []*TypeParam, targs []Type) substMap {
 	return proj
 }
 
+// makeRenameMap is like makeSubstMap, but creates a map used to rename type
+// parameters in from with the type parameters in to.
+func makeRenameMap(from, to []*TypeParam) substMap {
+	assert(len(from) == len(to))
+	proj := make(substMap, len(from))
+	for i, tpar := range from {
+		proj[tpar] = to[i]
+	}
+	return proj
+}
+
 func (m substMap) empty() bool {
 	return len(m) == 0
 }
@@ -149,7 +160,10 @@ func (subst *subster) typ(typ Type) Type {
 		methods, mcopied := subst.funcList(t.methods)
 		embeddeds, ecopied := subst.typeList(t.embeddeds)
 		if mcopied || ecopied {
-			iface := &Interface{embeddeds: embeddeds, implicit: t.implicit, complete: t.complete}
+			iface := subst.check.newInterface()
+			iface.embeddeds = embeddeds
+			iface.implicit = t.implicit
+			iface.complete = t.complete
 			// If we've changed the interface type, we may need to replace its
 			// receiver if the receiver type is the original interface. Receivers of
 			// *Named type are replaced during named type expansion.
@@ -92,15 +92,6 @@ func (xl termlist) norm() termlist {
 	return rl
 }
 
-// If the type set represented by xl is specified by a single (non-𝓤) term,
-// singleType returns that type. Otherwise it returns nil.
-func (xl termlist) singleType() Type {
-	if nl := xl.norm(); len(nl) == 1 {
-		return nl[0].typ // if nl.isAll() then typ is nil, which is ok
-	}
-	return nil
-}
-
 // union returns the union xl ∪ yl.
 func (xl termlist) union(yl termlist) termlist {
 	return append(xl, yl...).norm()
@@ -106,35 +106,6 @@ func TestTermlistNorm(t *testing.T) {
 	}
 }
 
-func TestTermlistSingleType(t *testing.T) {
-	// helper to deal with nil types
-	tstring := func(typ Type) string {
-		if typ == nil {
-			return "nil"
-		}
-		return typ.String()
-	}
-
-	for test, want := range map[string]string{
-		"∅":                 "nil",
-		"𝓤":                 "nil",
-		"int":               "int",
-		"myInt":             "myInt",
-		"~int":              "int",
-		"~int ∪ string":     "nil",
-		"~int ∪ myInt":      "int",
-		"∅ ∪ int":           "int",
-		"∅ ∪ ~int":          "int",
-		"∅ ∪ ~int ∪ string": "nil",
-	} {
-		xl := maketl(test)
-		got := tstring(xl.singleType())
-		if got != want {
-			t.Errorf("(%v).singleType() == %v; want %v", test, got, want)
-		}
-	}
-}
-
 func TestTermlistUnion(t *testing.T) {
 	for _, test := range []struct {
 		xl, yl, want string
@@ -148,7 +148,7 @@ func _[
 	_ = make /* ERROR expects 2 or 3 arguments */ (S1)
 	_ = make(S1, 10, 20)
 	_ = make /* ERROR expects 2 or 3 arguments */ (S1, 10, 20, 30)
-	_ = make(S2 /* ERROR cannot make S2: no structural type */ , 10)
+	_ = make(S2 /* ERROR cannot make S2: no core type */ , 10)
 
 	type M0 map[string]int
 	_ = make(map[string]int)
@@ -156,7 +156,7 @@ func _[
 	_ = make(M1)
 	_ = make(M1, 10)
 	_ = make/* ERROR expects 1 or 2 arguments */(M1, 10, 20)
-	_ = make(M2 /* ERROR cannot make M2: no structural type */ )
+	_ = make(M2 /* ERROR cannot make M2: no core type */ )
 
 	type C0 chan int
 	_ = make(chan int)
@@ -164,7 +164,7 @@ func _[
 	_ = make(C1)
 	_ = make(C1, 10)
 	_ = make/* ERROR expects 1 or 2 arguments */(C1, 10, 20)
-	_ = make(C2 /* ERROR cannot make C2: no structural type */ )
+	_ = make(C2 /* ERROR cannot make C2: no core type */ )
 	_ = make(C3)
 }
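The reworded errors reflect the underlying rule: make accepts a type parameter only when its type set has a single underlying ("core") type that is a slice, map, or channel. A hedged sketch of the accepted case (the helper name is invented for illustration):

```go
package main

import "fmt"

// MakeSlice is valid because every type in S's type set has the
// same underlying (core) type, []int.
func MakeSlice[S ~[]int](n int) S {
	return make(S, n)
}

func main() {
	type MyInts []int
	s := MakeSlice[MyInts](3)
	fmt.Println(len(s))
}
```

With a union such as S2's whose members have different underlying types, there is no single core type and make is rejected with the "no core type" error shown above.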
@@ -15,9 +15,9 @@ func append1() {
 	var x int
 	var s []byte
 	_ = append() // ERROR not enough arguments
-	_ = append("foo" /* ERROR not a slice */ )
-	_ = append(nil /* ERROR not a slice */ , s)
-	_ = append(x /* ERROR not a slice */ , s)
+	_ = append("foo" /* ERROR must be a slice */ )
+	_ = append(nil /* ERROR must be a slice */ , s)
+	_ = append(x /* ERROR must be a slice */ , s)
 	_ = append(s)
 	_ = append(s, nil...)
 	append /* ERROR not used */ (s)
@@ -77,7 +77,7 @@ func append3() {
 	_ = append(f2())
 	_ = append(f3())
 	_ = append(f5())
-	_ = append(ff /* ERROR not a slice */ ()) // TODO(gri) better error message
+	_ = append(ff /* ERROR must be a slice */ ()) // TODO(gri) better error message
}

func cap1() {
@@ -8,21 +8,21 @@ import "strconv"
 
 type any interface{}
 
-func f0[A any, B interface{~*C}, C interface{~*D}, D interface{~*A}](A, B, C, D) {}
+func f0[A any, B interface{*C}, C interface{*D}, D interface{*A}](A, B, C, D) {}
 
 func _() {
 	f := f0[string]
 	f("a", nil, nil, nil)
 	f0("a", nil, nil, nil)
 }
 
-func f1[A any, B interface{~*A}](A, B) {}
+func f1[A any, B interface{*A}](A, B) {}
 
 func _() {
 	f := f1[int]
 	f(int(0), new(int))
 	f1(int(0), new(int))
 }
 
-func f2[A any, B interface{~[]A}](A, B) {}
+func f2[A any, B interface{[]A}](A, B) {}
 
 func _() {
 	f := f2[byte]
 	f(byte(0), []byte{})
@@ -38,7 +38,7 @@ func _() {
 // 	f3(x, &x, &x)
 // }
 
-func f4[A any, B interface{~[]C}, C interface{~*A}](A, B, C) {}
+func f4[A any, B interface{[]C}, C interface{*A}](A, B, C) {}
 
 func _() {
 	f := f4[int]
 	var x int
@@ -46,7 +46,7 @@ func _() {
 	f4(x, []*int{}, &x)
 }
 
-func f5[A interface{~struct{b B; c C}}, B any, C interface{~*B}](x B) A { panic(0) }
+func f5[A interface{struct{b B; c C}}, B any, C interface{*B}](x B) A { panic(0) }
 
 func _() {
 	x := f5(1.2)
 	var _ float64 = x.b
@@ -79,14 +79,14 @@ var _ = Double(MySlice{1})
 
 type Setter[B any] interface {
 	Set(string)
-	~*B
+	*B
 }
 
 func FromStrings[T interface{}, PT Setter[T]](s []string) []T {
 	result := make([]T, len(s))
 	for i, v := range s {
 		// The type of &result[i] is *T which is in the type list
-		// of Setter2, so we can convert it to PT.
+		// of Setter, so we can convert it to PT.
 		p := PT(&result[i])
 		// PT has a Set method.
 		p.Set(v)
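The FromStrings test above relies on constraint type inference through the *B element of Setter: given T, the second type argument PT is inferred from PT's core type *T. A self-contained version of the same pattern (MyInt and its Set method are illustrative, not part of this CL):

```go
package main

import (
	"fmt"
	"strconv"
)

type Setter[B any] interface {
	Set(string)
	*B // PT's core type is *B, so &result[i] converts to PT
}

type MyInt int

func (p *MyInt) Set(s string) {
	n, _ := strconv.Atoi(s)
	*p = MyInt(n)
}

func FromStrings[T any, PT Setter[T]](s []string) []T {
	result := make([]T, len(s))
	for i, v := range s {
		PT(&result[i]).Set(v) // PT has the Set method
	}
	return result
}

func main() {
	// Only T is supplied explicitly; PT (= *MyInt) is inferred.
	fmt.Println(FromStrings[MyInt]([]string{"1", "2"}))
}
```

Dropping the ~ in the diff narrows the constraint element from "any type whose underlying type is *B" to exactly *B, which is what the inference here actually needs.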
@@ -14,7 +14,7 @@ func _() {
 }
 
 // recursive inference
-type Tr[A any, B ~*C, C ~*D, D ~*A] int
+type Tr[A any, B *C, C *D, D *A] int
 
 func _() {
 	var x Tr[string]
 	var y Tr[string, ***string, **string, *string]
@@ -25,11 +25,11 @@ func _() {
 }
 
 // other patterns of inference
-type To0[A any, B ~[]A] int
-type To1[A any, B ~struct{a A}] int
-type To2[A any, B ~[][]A] int
-type To3[A any, B ~[3]*A] int
-type To4[A any, B any, C ~struct{a A; b B}] int
+type To0[A any, B []A] int
+type To1[A any, B struct{a A}] int
+type To2[A any, B [][]A] int
+type To3[A any, B [3]*A] int
+type To4[A any, B any, C struct{a A; b B}] int
 
 func _() {
 	var _ To0[int]
 	var _ To1[int]
@@ -134,11 +134,11 @@ func _[T interface{ ~string }] (x T, i, j, k int) { var _ T = x[i:j:k /* ERROR 3
 
 type myByte1 []byte
 type myByte2 []byte
 func _[T interface{ []byte | myByte1 | myByte2 }] (x T, i, j, k int) { var _ T = x[i:j:k] }
-func _[T interface{ []byte | myByte1 | []int }] (x T, i, j, k int) { var _ T = x[ /* ERROR no structural type */ i:j:k] }
+func _[T interface{ []byte | myByte1 | []int }] (x T, i, j, k int) { var _ T = x[ /* ERROR no core type */ i:j:k] }
 
 func _[T interface{ []byte | myByte1 | myByte2 | string }] (x T, i, j, k int) { var _ T = x[i:j] }
 func _[T interface{ []byte | myByte1 | myByte2 | string }] (x T, i, j, k int) { var _ T = x[i:j:k /* ERROR 3-index slice of string */ ] }
-func _[T interface{ []byte | myByte1 | []int | string }] (x T, i, j, k int) { var _ T = x[ /* ERROR no structural type */ i:j] }
+func _[T interface{ []byte | myByte1 | []int | string }] (x T, i, j, k int) { var _ T = x[ /* ERROR no core type */ i:j] }
 
 // len/cap built-ins
@@ -230,7 +230,7 @@ func _[
 	for _, _ = range s1 {}
 
 	var s2 S2
-	for range s2 /* ERROR cannot range over s2.*no structural type */ {}
+	for range s2 /* ERROR cannot range over s2.*no core type */ {}
 
 	var a0 []int
 	for range a0 {}
@@ -243,7 +243,7 @@ func _[
 	for _, _ = range a1 {}
 
 	var a2 A2
-	for range a2 /* ERROR cannot range over a2.*no structural type */ {}
+	for range a2 /* ERROR cannot range over a2.*no core type */ {}
 
 	var p0 *[10]int
 	for range p0 {}
@@ -256,7 +256,7 @@ func _[
 	for _, _ = range p1 {}
 
 	var p2 P2
-	for range p2 /* ERROR cannot range over p2.*no structural type */ {}
+	for range p2 /* ERROR cannot range over p2.*no core type */ {}
 
 	var m0 map[string]int
 	for range m0 {}
@@ -269,7 +269,7 @@ func _[
 	for _, _ = range m1 {}
 
 	var m2 M2
-	for range m2 /* ERROR cannot range over m2.*no structural type */ {}
+	for range m2 /* ERROR cannot range over m2.*no core type */ {}
 }
 
 // type inference checks
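The range errors above enforce the same core-type rule as send and make: ranging over a type parameter is permitted only when every type in its type set shares one underlying type. A sketch of the accepted side (the function name is invented):

```go
package main

import "fmt"

// Sum may range over s because the core type of S is []int:
// all types in S's type set have that underlying type.
func Sum[S ~[]int](s S) int {
	total := 0
	for _, v := range s {
		total += v
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))
}
```

A union like S2's above mixes underlying types, so there is no core type and the range statement is rejected.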
@@ -78,7 +78,7 @@ func _() {
 	related1(si, "foo" /* ERROR cannot use "foo" */ )
 }
 
-func related2[Elem any, Slice interface{~[]Elem}](e Elem, s Slice) {}
+func related2[Elem any, Slice interface{[]Elem}](e Elem, s Slice) {}
 
 func _() {
 	// related2 can be called with explicit instantiation.
@@ -109,16 +109,8 @@ func _() {
 	related3[int, []int]()
 	related3[byte, List[byte]]()
 
-	// Alternatively, the 2nd type argument can be inferred
-	// from the first one through constraint type inference.
-	related3[int]()
-
-	// The inferred type is the structural type of the Slice
-	// type parameter.
-	var _ []int = related3[int]()
-
-	// It is not the defined parameterized type List.
-	type anotherList []float32
-	var _ anotherList = related3[float32]() // valid
-	var _ anotherList = related3 /* ERROR cannot use .* \(value of type List\[float32\]\) as anotherList */ [float32, List[float32]]()
+	// The 2nd type argument cannot be inferred from the first
+	// one because there's two possible choices: []Elem and
+	// List[Elem].
+	related3[int]( /* ERROR cannot infer Slice */ )
 }
@@ -35,7 +35,7 @@ func (t T1[[ /* ERROR must be an identifier */ ]int]) m2() {}
 // style. In m3 below, int is the name of the local receiver type parameter
 // and it shadows the predeclared identifier int which then cannot be used
 // anymore as expected.
-// This is no different from locally redelaring a predeclared identifier
+// This is no different from locally re-declaring a predeclared identifier
 // and usually should be avoided. There are some notable exceptions; e.g.,
 // sometimes it makes sense to use the identifier "copy" which happens to
 // also be the name of a predeclared built-in function.
@@ -292,7 +292,7 @@ func _[T interface{~int|~float64}]() {
 
 // It is possible to create composite literals of type parameter
 // type as long as it's possible to create a composite literal
-// of the structural type of the type parameter's constraint.
+// of the core type of the type parameter's constraint.
 func _[P interface{ ~[]int }]() P {
 	return P{}
 	return P{1, 2, 3}
@@ -307,7 +307,7 @@ func _[P interface{ ~[]E }, E interface{ map[string]P } ]() P {
 }
 
 // This is a degenerate case with a singleton type set, but we can create
-// composite literals even if the structural type is a defined type.
+// composite literals even if the core type is a defined type.
 type MyInts []int
 
 func _[P MyInts]() P {
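That degenerate case can be exercised directly: a composite literal of type parameter type is allowed here because the constraint's singleton type set gives P the core type MyInts, a slice type. A sketch (the function name New is invented; MyInts matches the test above):

```go
package main

import "fmt"

type MyInts []int

// The type set of P is just {MyInts}; its core type is MyInts,
// whose underlying type is a slice, so P{...} is permitted even
// though the core type is a defined type.
func New[P MyInts]() P {
	return P{1, 2, 3}
}

func main() {
	fmt.Println(New[MyInts]())
}
```

The resulting value has type P, i.e. MyInts after instantiation, not a plain []int.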