An applied AI research company building for the next era of art, entertainment and human creativity. We're hiring: https://t.co/Aj11xyhxOg
New research lab. Exploring new mediums of thought. Expanding the imaginative powers of the human species. Join our beta: https://t.co/yAUpCWJRzi
OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. We’re hiring: https://t.co/dJGr6LgzPA
We’re a team of scientists, engineers, ethicists and more, committed to solving intelligence, to advance science and benefit humanity.
Co-Founder of Coursera; Stanford CS adjunct faculty. Former head of Baidu AI Group/Google Brain. #ai #machinelearning #deeplearning #MOOCs
Tensors and neural networks in Python with strong hardware acceleration. PyTorch is an open source project at the Linux Foundation. #PyTorchFoundation
Fine mechanical watches, built by hand to last a lifetime, designed to look good any time. Welcome to our official Twitter page. #nomosglashuette
writer about town | now: @JoongAngDaily | words for Guardian, NPR, Foreign Policy et al | @aaja | she/her | email: maryranyang(at)gmail(dot)com
Professor at NYU. Chief AI Scientist at Meta.
Researcher in AI, Machine Learning, Robotics, etc.
ACM Turing Award Laureate.
"A girl in a pink gown with jewelry against the backdrop of a mountain range during winter"
By Gencraft: the world's fastest (and free!) AI image and video generator
REAL-TIME object detection WITHOUT TRAINING
YOLO-World is a new SOTA open-vocabulary object detector that outperforms previous models in terms of both accuracy and speed. 35.4 AP with 52.0 FPS on V100.
↓ read more
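The core trick behind open-vocabulary detectors like YOLO-World is scoring region features against text embeddings of arbitrary class prompts, so no retraining is needed for new classes. Here is a toy numpy sketch of that matching step (the function name, feature dimensions, and values are all illustrative — this is not YOLO-World's actual API or feature space):

```python
import numpy as np

def cosine_scores(region_feats, text_embs):
    """Score each detected region against each text prompt via cosine similarity."""
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return r @ t.T  # shape: (num_regions, num_prompts)

# Toy features: 2 detected regions, 2 text prompts, in a made-up 4-d space.
regions = np.array([[0.9, 0.1, 0.0, 0.0],   # resembles prompt 0
                    [0.0, 0.0, 0.8, 0.2]])  # resembles prompt 1
prompts = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0]])

scores = cosine_scores(regions, prompts)
labels = scores.argmax(axis=1)  # each region takes its best-matching prompt
```

Because the class set is just a list of text embeddings, swapping in new categories at inference time is a matter of embedding new prompts — that is what "without training" refers to.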
I used to find writing CUDA code rather terrifying. But then I discovered a couple of tricks that actually make it quite accessible.
In this video I introduce CUDA in a way that will be accessible to Python programmers, and I even show how to do it all in @GoogleColab!
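One trick along these lines is to write the kernel body as a plain Python function over a single flat index, debug it with an ordinary CPU loop, and only then hand it to a GPU compiler. The sketch below simulates a 1-D grid launch on the CPU (the function names and the workflow framing are my assumptions, not the video's exact code):

```python
import numpy as np

def add_kernel(i, out, a, b):
    """Body of a CUDA-style kernel: one 'thread' handles one index i."""
    if i < out.size:          # bounds guard, as a real kernel needs for excess threads
        out[i] = a[i] + b[i]

def launch_on_cpu(kernel, n_threads, *args):
    """Simulate a 1-D grid launch by looping over thread indices on the CPU."""
    for i in range(n_threads):
        kernel(i, *args)

a = np.arange(4, dtype=np.float32)
b = np.arange(4, dtype=np.float32)
out = np.zeros(4, dtype=np.float32)
launch_on_cpu(add_kernel, out.size, out, a, b)
# out now holds a + b; the same kernel body could later be adapted for a
# GPU compiler such as numba's @cuda.jit with only small changes.
```

Debugging the kernel logic in plain Python first is what makes the eventual CUDA port far less terrifying.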
“Are Transformers Effective for Time Series Forecasting?” is a pivotal paper that decisively highlights the shortcomings of research on transformers for #timeseries #forecasting.
This paper effectively exposes the deceptive practices…
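The paper's headline result is that a very simple linear model over the look-back window can match or beat elaborate transformer forecasters. Here is a toy numpy sketch of that kind of one-layer linear baseline, fit by least squares (the variable names, window sizes, and synthetic series are illustrative, not the paper's exact DLinear code):

```python
import numpy as np

L, H = 8, 2                        # look-back length and forecast horizon
t = np.arange(200, dtype=float)
series = 0.05 * t + np.sin(t / 5)  # toy series: linear trend + seasonality

# Build (look-back window -> future window) training pairs.
X = np.stack([series[i:i + L] for i in range(len(series) - L - H)])
Y = np.stack([series[i + L:i + L + H] for i in range(len(series) - L - H)])

# One linear layer, no attention: a single least-squares fit.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = series[-L:] @ W             # forecast the next H points
```

On trend-plus-seasonality data like this, the linear map fits essentially exactly — which is the paper's point: beating such a baseline is a meaningful bar that many transformer forecasters failed to clear.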
Today we’re releasing Code Llama 70B: a new, more performant version of our LLM for code generation — available under the same license as previous Code Llama models.
Download the models ➡️ bit.ly/3Oil6bQ
• CodeLlama-70B
• CodeLlama-70B-Python
• CodeLlama-70B-Instruct
RoMa: an easy-to-use, stable and efficient library to deal with rotations and spatial transformations in PyTorch.
Read all about this PyTorch Ecosystem Tool in our latest Medium post ⚡hubs.la/Q02hFsVf0
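The properties such a library maintains for you — compositions stay valid rotations, inverses are transposes — can be checked by hand in plain numpy. A minimal sketch (this uses hand-rolled numpy, not RoMa's own API):

```python
import numpy as np

def rotz(theta):
    """3x3 rotation matrix about the z-axis (plain numpy, for illustration)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R1, R2 = rotz(0.3), rotz(0.5)
R = R2 @ R1                                 # composing rotations = matrix product
assert np.allclose(R, rotz(0.8))            # angles about one axis add
assert np.allclose(R @ R.T, np.eye(3))      # rotations are orthogonal
assert np.allclose(np.linalg.inv(R), R.T)   # inverse is just the transpose
```

A dedicated library earns its keep once you need to convert between representations (matrices, quaternions, axis-angle), keep everything differentiable, and guard against numerical drift — exactly the chores that are tedious to get right by hand.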
Friends don't let friends make bad charts!
Chenxin Li pulled together a lot of great advice for data visualization, with clear "do this, not that" examples for each item.
Here are a few of my favorites, see the link below for more.
Stanford CS25 - Transformers United
Great set of lectures on advanced applications and topics related to Transformers.
My favorite lectures are the ones on common sense reasoning and generalist agents in open-ended worlds.
Some of the lectures are a bit old now but the…
Did you know that with 2 Python libraries, 6 lines of code and around 15 seconds, you can load satellite data from anywhere in the world?
This is so much easier than it used to be!
The electronic version of the new fourth edition of my book Linear Algebra Done Right has now been downloaded over 80 thousand times in the two months since it was released. The electronic version of this Open Access book is freely available at linear.axler.net.
The print…
Here are 300 hours of curated courses focused on Machine Learning Engineering.
There are 15 courses. From beginner to advanced. From Google. For free.
Some of the topics they cover:
• Fundamentals of Machine Learning
• Feature Engineering
• Production Machine Learning…
Phi-2 is a damn good model! Where are all the Phi-2 finetunes?
Here's code and a guide on how to finetune Phi-2 using QLoRA
+ includes a part on creating a dataset from a seed of instructions
I’m still using a 7B model in production, mainly because it just works well for these tasks and it’s cheap to run. Why exactly would I pay for GPT-4 when I can apply a LoRA over a 7B model for a fraction of the cost?
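The LoRA idea behind both tweets is small: instead of updating a frozen weight matrix W, train a low-rank delta (alpha/r)·B·A, where B and A have a tiny inner rank r, and merge it in at deployment (QLoRA additionally keeps W quantized, which this sketch omits). A toy numpy illustration — the dimensions and initialization mirror the usual LoRA recipe, but names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 16, 16, 2                 # full dims vs. tiny LoRA rank
W = rng.normal(size=(d, k))         # frozen base weight (stays untouched)
A = rng.normal(size=(r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                # B starts at zero, so the delta starts at zero
alpha = 16.0

def lora_forward(x):
    """y = x @ (W + (alpha/r) * B @ A).T, without ever forming the merged matrix."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(1, k))
y = lora_forward(x)

# For deployment, the delta can be merged into W once, adding zero latency.
merged = W + (alpha / r) * B @ A
assert np.allclose(y, x @ merged.T)
```

Only A and B (2·r·k values here, versus d·k for W) need training and storage per task — which is why stacking cheap adapters over one 7B base is so much cheaper than paying for a frontier API.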
Claude 2 is really good indeed.
GPT-4's ability to recognize and describe images is impressive. However, Claude 2's capability to work with PDFs might be an even more significant productivity boost.
Maybe it's not one vs the other but about using both (for different things).
If you haven't tried Claude yet, it's absolutely worth spending time with - I lean on it a lot for working with longer documents, since it can handle 100,000 tokens at a time (GPT-4 is only 8,000)
Plus you can upload PDFs to it - I've used it with 100+ page documents
An interesting and enjoyable read from Léon Bottou and Bernhard Schölkopf. It suggests different analogies and metaphors for framing what's going on with large language models through the imagery of Jorge Luis Borges, e.g. Fiction Machines & Vindications
arxiv.org/abs/2310.01425